10.2 Transformation of Initial Value Problems

We now discuss the application of Laplace transforms to solve a linear differential equation with constant coefficients, such as

$$a x''(t) + b x'(t) + c x(t) = f(t),$$
(1)

with given initial conditions $x(0) = x_0$ and $x'(0) = x_0'$. By the linearity of the Laplace transformation, we can transform Eq. (1) by separately taking the Laplace transform of each term in the equation. The transformed equation is

$$a\,\mathcal{L}\{x''(t)\} + b\,\mathcal{L}\{x'(t)\} + c\,\mathcal{L}\{x(t)\} = \mathcal{L}\{f(t)\};$$
(2)

it involves the transforms of the derivatives $x'$ and $x''$ of the unknown function $x(t)$. The key to the method is Theorem 1, which tells us how to express the transform of the derivative of a function in terms of the transform of the function itself.

The function $f$ is called piecewise smooth on the bounded interval $[a, b]$ if it is piecewise continuous on $[a, b]$ and differentiable except at finitely many points, with $f'(t)$ being piecewise continuous on $[a, b]$. We may assign arbitrary values to $f(t)$ at the isolated points at which $f$ is not differentiable. We say that $f$ is piecewise smooth for $t \ge 0$ if it is piecewise smooth on every bounded subinterval of $[0, +\infty)$. Figure 10.2.1 indicates how “corners” on the graph of $f$ correspond to discontinuities in its derivative $f'$.

FIGURE 10.2.1.

The discontinuities of $f'$ correspond to “corners” on the graph of $f$.

The main idea of the proof of Theorem 1 is exhibited best by the case in which $f'(t)$ is continuous (not merely piecewise continuous) for $t \ge 0$. Then, beginning with the definition of $\mathcal{L}\{f'(t)\}$ and integrating by parts, we get

$$\mathcal{L}\{f'(t)\} = \int_0^{\infty} e^{-st} f'(t)\,dt = \Big[e^{-st} f(t)\Big]_{t=0}^{\infty} + s\int_0^{\infty} e^{-st} f(t)\,dt.$$

Because of (3) (the exponential-order bound $|f(t)| \le M e^{ct}$), the integrated term $e^{-st} f(t)$ approaches zero (when $s > c$) as $t \to +\infty$, and its value at the lower limit $t = 0$ contributes $-f(0)$ to the evaluation of the preceding expression. The integral that remains is simply $\mathcal{L}\{f(t)\}$; by Theorem 2 of Section 10.1, the integral converges when $s > c$. Then $\mathcal{L}\{f'(t)\}$ exists when $s > c$, and its value is that given in Eq. (4), $\mathcal{L}\{f'(t)\} = s\,\mathcal{L}\{f(t)\} - f(0)$. We will defer the case in which $f'(t)$ has isolated discontinuities to the end of this section.
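Readers with a computer algebra system can spot-check this formula symbolically. The following Mathematica sketch (not part of the text's argument; the sample choice $f(t) = \cos kt$ is ours) verifies Eq. (4) for that function.

f[t_] := Cos[k t]
Fs = LaplaceTransform[f[t], t, s];                       (* F(s) = s/(s^2 + k^2) *)
Simplify[LaplaceTransform[f'[t], t, s] == s Fs - f[0]]   (* True *)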

Solution of Initial Value Problems

In order to transform Eq. (1), we need the transform of the second derivative as well. If we assume that $g(t) = f'(t)$ satisfies the hypotheses of Theorem 1, then that theorem implies that

$$\mathcal{L}\{f''(t)\} = \mathcal{L}\{g'(t)\} = s\,\mathcal{L}\{g(t)\} - g(0) = s\,\mathcal{L}\{f'(t)\} - f'(0) = s\big[s\,\mathcal{L}\{f(t)\} - f(0)\big] - f'(0),$$

and thus

$$\mathcal{L}\{f''(t)\} = s^2 F(s) - s f(0) - f'(0).$$
(5)

A repetition of this calculation gives

$$\mathcal{L}\{f'''(t)\} = s\,\mathcal{L}\{f''(t)\} - f''(0) = s^3 F(s) - s^2 f(0) - s f'(0) - f''(0).$$
(6)

After finitely many such steps we obtain the following extension of Theorem 1: if $f, f', \ldots, f^{(n-1)}$ are continuous and piecewise smooth for $t \ge 0$ and of exponential order as $t \to +\infty$, then

$$\mathcal{L}\{f^{(n)}(t)\} = s^n F(s) - s^{n-1} f(0) - s^{n-2} f'(0) - \cdots - f^{(n-1)}(0).$$

(7)

Example 1

Solve the initial value problem

$$x'' - x' - 6x = 0; \qquad x(0) = 2, \quad x'(0) = -1.$$

Solution

With the given initial values, Eqs. (4) and (5) yield

$$\mathcal{L}\{x'(t)\} = s\,\mathcal{L}\{x(t)\} - x(0) = s X(s) - 2$$

and

$$\mathcal{L}\{x''(t)\} = s^2\,\mathcal{L}\{x(t)\} - s\,x(0) - x'(0) = s^2 X(s) - 2s + 1,$$

where (according to our convention about notation) X(s) denotes the Laplace transform of the (unknown) function x(t). Hence the transformed equation is

$$\big[s^2 X(s) - 2s + 1\big] - \big[s X(s) - 2\big] - 6\big[X(s)\big] = 0,$$

which we quickly simplify to

$$(s^2 - s - 6)\,X(s) - 2s + 3 = 0.$$

Thus

$$X(s) = \frac{2s - 3}{s^2 - s - 6} = \frac{2s - 3}{(s - 3)(s + 2)}.$$

By the method of partial fractions (of integral calculus), there exist constants A and B such that

$$\frac{2s - 3}{(s - 3)(s + 2)} = \frac{A}{s - 3} + \frac{B}{s + 2},$$

and multiplication of both sides of this equation by $(s - 3)(s + 2)$ yields the identity

$$2s - 3 = A(s + 2) + B(s - 3).$$

If we substitute $s = 3$, we find that $A = \frac{3}{5}$; substitution of $s = -2$ shows that $B = \frac{7}{5}$. Hence

$$X(s) = \mathcal{L}\{x(t)\} = \frac{3/5}{s - 3} + \frac{7/5}{s + 2}.$$

Because $\mathcal{L}^{-1}\{1/(s - a)\} = e^{at}$, it follows that

$$x(t) = \tfrac{3}{5} e^{3t} + \tfrac{7}{5} e^{-2t}$$

is the solution of the original initial value problem. Note that we did not first find the general solution of the differential equation. The Laplace transform method directly yields the desired particular solution, automatically taking into account—via Theorem 1 and its corollary—the given initial conditions.
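For readers who want to replay Example 1 by machine, here is a Mathematica sketch in the style of the application at the end of this section (the variable names Xs and DE are ours):

de = x''[t] - x'[t] - 6 x[t] == 0;
inits = {x[0] -> 2, x'[0] -> -1};
DE = LaplaceTransform[de, t, s] /. inits;                 (* transformed equation *)
Xs = LaplaceTransform[x[t], t, s] /.
       First @ Solve[DE, LaplaceTransform[x[t], t, s]];   (* solve for X(s) *)
Apart[Xs]                          (* partial fractions: 3/(5 (s - 3)) + 7/(5 (s + 2)) *)
InverseLaplaceTransform[Xs, s, t]  (* (3/5) E^(3 t) + (7/5) E^(-2 t) *)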

Remark

In Example 1 we found the values of the partial-fraction coefficients $A$ and $B$ by the “trick” of separately substituting the roots $s = 3$ and $s = -2$ of the original denominator $s^2 - s - 6 = (s - 3)(s + 2)$ into the equation

$$2s - 3 = A(s + 2) + B(s - 3)$$

that resulted from clearing fractions. In lieu of any such shortcut, the “sure-fire” method is to collect coefficients of powers of s on the right-hand side,

$$2s - 3 = (A + B)s + (2A - 3B).$$

Then upon equating coefficients of terms of like degree, we get the linear equations

$$A + B = 2, \qquad 2A - 3B = -3,$$

which are readily solved for the same values $A = \frac{3}{5}$ and $B = \frac{7}{5}$.
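Both approaches of this Remark are easy to automate; a brief Mathematica sketch (illustrative only):

Apart[(2 s - 3)/((s - 3) (s + 2))]            (* 3/(5 (s - 3)) + 7/(5 (s + 2)) *)
Solve[{A + B == 2, 2 A - 3 B == -3}, {A, B}]  (* {{A -> 3/5, B -> 7/5}} *)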

Example 2

Forced mass-spring system Solve the initial value problem

$$x'' + 4x = \sin 3t; \qquad x(0) = x'(0) = 0.$$

Such a problem arises in the motion of a mass-and-spring system with external force, as shown in Fig. 10.2.2.

Solution

Because both initial values are zero, Eq. (5) yields $\mathcal{L}\{x''(t)\} = s^2 X(s)$. We read the transform of $\sin 3t$ from the table in Fig. 10.1.2 (Section 10.1) and thereby get the transformed equation

$$s^2 X(s) + 4 X(s) = \frac{3}{s^2 + 9}.$$

FIGURE 10.2.2.

A mass–and–spring system satisfying the initial value problem in Example 2. The mass is initially at rest in its equilibrium position.

Therefore,

$$X(s) = \frac{3}{(s^2 + 4)(s^2 + 9)}.$$

The method of partial fractions calls for

$$\frac{3}{(s^2 + 4)(s^2 + 9)} = \frac{A s + B}{s^2 + 4} + \frac{C s + D}{s^2 + 9}.$$

The sure-fire approach would be to clear fractions by multiplying both sides by the common denominator, and then collect coefficients of powers of s on the right-hand side. Equating coefficients of like powers on the two sides of the resulting equation would then yield four linear equations that we could solve for A, B, C, and D.

However, here we can anticipate that $A = C = 0$, because neither the numerator nor the denominator on the left involves any odd powers of $s$, whereas nonzero values for $A$ or $C$ would lead to odd-degree terms on the right. So we replace $A$ and $C$ with zero before clearing fractions. The result is the identity

$$3 = B(s^2 + 9) + D(s^2 + 4) = (B + D)s^2 + (9B + 4D).$$

When we equate coefficients of like powers of s we get the linear equations

$$B + D = 0, \qquad 9B + 4D = 3,$$

which are readily solved for $B = \frac{3}{5}$ and $D = -\frac{3}{5}$. Hence

$$X(s) = \mathcal{L}\{x(t)\} = \frac{3}{10}\cdot\frac{2}{s^2 + 4} - \frac{1}{5}\cdot\frac{3}{s^2 + 9}.$$

Because $\mathcal{L}\{\sin 2t\} = 2/(s^2 + 4)$ and $\mathcal{L}\{\sin 3t\} = 3/(s^2 + 9)$, it follows that

$$x(t) = \tfrac{3}{10}\sin 2t - \tfrac{1}{5}\sin 3t.$$

Figure 10.2.3 shows the graph of this period $2\pi$ position function of the mass. Note that the Laplace transform method again gives the solution directly, without the necessity of first finding the complementary function and a particular solution of the original nonhomogeneous differential equation. Thus nonhomogeneous equations are solved in exactly the same manner as are homogeneous equations.
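Since this is the same initial value problem treated with Mathematica in the application at the end of this section, we give here only a quick residual check (a sketch; the name xsol is ours) that the function just obtained satisfies the differential equation and the initial conditions.

xsol[t_] := 3/10 Sin[2 t] - 1/5 Sin[3 t];
Simplify[{xsol''[t] + 4 xsol[t] == Sin[3 t], xsol[0] == 0, xsol'[0] == 0}]
(* {True, True, True} *)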

FIGURE 10.2.3.

The position function x(t) in Example 2.

Examples 1 and 2 illustrate the solution procedure that is outlined in Fig. 10.2.4.

FIGURE 10.2.4.

Using the Laplace transform to solve an initial value problem.

Linear Systems

Laplace transforms are used frequently in engineering problems to solve linear systems in which the coefficients are all constants. When initial conditions are specified, the Laplace transform reduces such a linear system of differential equations to a linear system of algebraic equations in which the unknowns are the transforms of the solution functions. As Example 3 illustrates, the technique for a system is essentially the same as for a single linear differential equation with constant coefficients.

Example 3

Dual mass-spring system Solve the system

$$2x'' = -6x + 2y,$$
$$y'' = 2x - 2y + 40\sin 3t,$$
(8)

subject to the initial conditions

$$x(0) = x'(0) = y(0) = y'(0) = 0.$$
(9)

Thus the force $f(t) = 40\sin 3t$ is applied to the second mass of Fig. 10.2.5, beginning at time $t = 0$ when the system is at rest in its equilibrium position.

FIGURE 10.2.5.

A mass–and–spring system satisfying the initial value problem in Example 3. Both masses are initially at rest in their equilibrium positions.

Solution

We write $X(s) = \mathcal{L}\{x(t)\}$ and $Y(s) = \mathcal{L}\{y(t)\}$. Then the initial conditions in (9) imply that

$$\mathcal{L}\{x''(t)\} = s^2 X(s) \quad\text{and}\quad \mathcal{L}\{y''(t)\} = s^2 Y(s).$$

Because $\mathcal{L}\{\sin 3t\} = 3/(s^2 + 9)$, the transforms of the equations in (8) are the equations

$$2 s^2 X(s) = -6 X(s) + 2 Y(s),$$
$$s^2 Y(s) = 2 X(s) - 2 Y(s) + \frac{120}{s^2 + 9}.$$

Thus the transformed system is

$$(s^2 + 3)\,X(s) - Y(s) = 0,$$
$$-2\,X(s) + (s^2 + 2)\,Y(s) = \frac{120}{s^2 + 9}.$$
(10)

The determinant of this pair of linear equations in X(s) and Y(s) is

$$\begin{vmatrix} s^2 + 3 & -1 \\ -2 & s^2 + 2 \end{vmatrix} = (s^2 + 3)(s^2 + 2) - 2 = (s^2 + 1)(s^2 + 4),$$

and we readily solve—using Cramer’s rule, for instance—the system in (10) for

$$X(s) = \frac{120}{(s^2 + 1)(s^2 + 4)(s^2 + 9)} = \frac{5}{s^2 + 1} - \frac{8}{s^2 + 4} + \frac{3}{s^2 + 9}$$
(11a)

and

$$Y(s) = \frac{120\,(s^2 + 3)}{(s^2 + 1)(s^2 + 4)(s^2 + 9)} = \frac{10}{s^2 + 1} + \frac{8}{s^2 + 4} - \frac{18}{s^2 + 9}.$$
(11b)

The partial fraction decompositions in Eqs. (11a) and (11b) are readily found using the method of Example 2. For instance, noting that the denominator factors are linear in $s^2$, we can write

$$\frac{120}{(s^2 + 1)(s^2 + 4)(s^2 + 9)} = \frac{A}{s^2 + 1} + \frac{B}{s^2 + 4} + \frac{C}{s^2 + 9},$$

and it follows that

$$120 = A(s^2 + 4)(s^2 + 9) + B(s^2 + 1)(s^2 + 9) + C(s^2 + 1)(s^2 + 4).$$
(12)

Substitution of $s^2 = -1$ (that is, $s = i$, a zero of the factor $s^2 + 1$) in Eq. (12) gives $120 = A \cdot 3 \cdot 8$, so $A = 5$. Similarly, substitution of $s^2 = -4$ in Eq. (12) yields $B = -8$, and substitution of $s^2 = -9$ yields $C = 3$. Thus we obtain the partial fraction decomposition shown in Eq. (11a).

At any rate, the inverse Laplace transforms of the expressions in Eqs. (11a) and (11b) give the solution

$$x(t) = 5\sin t - 4\sin 2t + \sin 3t,$$
$$y(t) = 10\sin t + 4\sin 2t - 6\sin 3t.$$

Figure 10.2.6 shows the graphs of these two period $2\pi$ position functions of the two masses.
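The transform-and-solve steps for the system (8)-(9) can also be scripted; the following Mathematica sketch (for verification only; the names eqs, vars, DEs, and sol are ours) reproduces the solution.

eqs   = {2 x''[t] == -6 x[t] + 2 y[t], y''[t] == 2 x[t] - 2 y[t] + 40 Sin[3 t]};
inits = {x[0] -> 0, x'[0] -> 0, y[0] -> 0, y'[0] -> 0};
vars  = {LaplaceTransform[x[t], t, s], LaplaceTransform[y[t], t, s]};
DEs   = (LaplaceTransform[#, t, s] & /@ eqs) /. inits;   (* the transformed system (10) *)
sol   = First @ Solve[DEs, vars];
InverseLaplaceTransform[#, s, t] & /@ (vars /. sol)
(* {5 Sin[t] - 4 Sin[2 t] + Sin[3 t], 10 Sin[t] + 4 Sin[2 t] - 6 Sin[3 t]} *)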

FIGURE 10.2.6.

The position functions x(t) and y(t) in Example 3.

The Transform Perspective

Let us regard the general constant-coefficient second-order equation as the equation of motion

$$m x'' + c x' + k x = f(t)$$

of the familiar mass–spring–dashpot system (Fig. 10.2.7). Then the transformed equation is

FIGURE 10.2.7.

A mass–spring–dashpot system with external force f(t).

$$m\big[s^2 X(s) - s\,x(0) - x'(0)\big] + c\big[s X(s) - x(0)\big] + k X(s) = F(s).$$
(13)

Note that Eq. (13) is an algebraic equation—indeed, a linear equation—in the “unknown” X(s). This is the source of the power of the Laplace transform method:

Linear differential equations are transformed into readily solved algebraic equations.

If we solve Eq. (13) for X(s), we get

$$X(s) = \frac{F(s)}{Z(s)} + \frac{I(s)}{Z(s)},$$
(14)

where

$$Z(s) = m s^2 + c s + k \quad\text{and}\quad I(s) = m\,x(0)\,s + m\,x'(0) + c\,x(0).$$

Note that $Z(s)$ depends only on the physical system itself. Thus Eq. (14) presents $X(s) = \mathcal{L}\{x(t)\}$ as the sum of a term depending only on the external force and one depending only on the initial conditions. In the case of an underdamped system, these two terms are the transforms

$$\mathcal{L}\{x_{\mathrm{sp}}(t)\} = \frac{F(s)}{Z(s)} \quad\text{and}\quad \mathcal{L}\{x_{\mathrm{tr}}(t)\} = \frac{I(s)}{Z(s)}$$

of the steady periodic solution and the transient solution, respectively. The only potential difficulty in finding these solutions is in finding the inverse Laplace transform of the right-hand side in Eq. (14). Much of the remainder of this chapter is devoted to finding Laplace transforms and inverse transforms. In particular, we seek those methods that are sufficiently powerful to enable us to solve problems that—unlike those in Examples 1 and 2—cannot be solved readily by the methods of Chapter 5.
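The decomposition in (14) can also be exhibited symbolically. In the Mathematica sketch below (illustrative only; the forcing function $f(t)$ is deliberately left unspecified, so its transform appears as an inert LaplaceTransform object playing the role of $F(s)$):

eq = m x''[t] + c x'[t] + k x[t] == f[t];
EQ = LaplaceTransform[eq, t, s];
Xs = LaplaceTransform[x[t], t, s] /. First @ Solve[EQ, LaplaceTransform[x[t], t, s]];
Simplify[Xs]
(* (LaplaceTransform[f[t], t, s] + m s x[0] + m x'[0] + c x[0]) / (m s^2 + c s + k), *)
(* that is, X(s) = [F(s) + I(s)]/Z(s), in agreement with Eq. (14)                    *)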

Additional Transform Techniques

Example 4

Show that

$$\mathcal{L}\{t e^{at}\} = \frac{1}{(s - a)^2}.$$

Solution

If $f(t) = t e^{at}$, then $f(0) = 0$ and $f'(t) = e^{at} + a t e^{at}$. Hence Theorem 1 gives

$$\mathcal{L}\{e^{at} + a t e^{at}\} = \mathcal{L}\{f'(t)\} = s\,\mathcal{L}\{f(t)\} = s\,\mathcal{L}\{t e^{at}\}.$$

It follows from the linearity of the transform that

$$\mathcal{L}\{e^{at}\} + a\,\mathcal{L}\{t e^{at}\} = s\,\mathcal{L}\{t e^{at}\}.$$

Hence

$$\mathcal{L}\{t e^{at}\} = \frac{\mathcal{L}\{e^{at}\}}{s - a} = \frac{1}{(s - a)^2}$$
(15)

because $\mathcal{L}\{e^{at}\} = 1/(s - a)$.
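Formula (15) is also a one-line check in a computer algebra system (illustrative only):

LaplaceTransform[t Exp[a t], t, s]   (* 1/(s - a)^2, valid for Re[s] > Re[a] *)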

Example 5

Find $\mathcal{L}\{t \sin kt\}$.

Solution

Let $f(t) = t \sin kt$. Then $f(0) = 0$ and

$$f'(t) = \sin kt + k t \cos kt.$$

The derivative involves the new function $t\cos kt$, so we note that $f'(0) = 0$ and differentiate again. The result is

$$f''(t) = 2k\cos kt - k^2 t \sin kt.$$

But $\mathcal{L}\{f''(t)\} = s^2\,\mathcal{L}\{f(t)\}$ by the formula in (5) for the transform of the second derivative, and $\mathcal{L}\{\cos kt\} = s/(s^2 + k^2)$, so we have

$$\frac{2ks}{s^2 + k^2} - k^2\,\mathcal{L}\{t\sin kt\} = s^2\,\mathcal{L}\{t\sin kt\}.$$

Finally, we solve this equation for

$$\mathcal{L}\{t\sin kt\} = \frac{2ks}{(s^2 + k^2)^2}.$$
(16)

This procedure is considerably more pleasant than the alternative of evaluating the integral

$$\mathcal{L}\{t\sin kt\} = \int_0^{\infty} t\,e^{-st} \sin kt\,dt.$$
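Either route can be confirmed symbolically; a Mathematica sketch (the convergence assumptions shown are ours):

LaplaceTransform[t Sin[k t], t, s]             (* 2 k s/(s^2 + k^2)^2, as in (16) *)
Integrate[t Exp[-s t] Sin[k t], {t, 0, Infinity},
  Assumptions -> {s > 0, Element[k, Reals]}]   (* the same expression *)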

Examples 4 and 5 exploit the fact that if $f(0) = 0$, then differentiation of $f$ corresponds to multiplication of its transform by $s$. It is reasonable to expect the inverse operation of integration (antidifferentiation) to correspond to division of the transform by $s$.

Proof:

Because f is piecewise continuous, the fundamental theorem of calculus implies that

$$g(t) = \int_0^t f(\tau)\,d\tau$$

is continuous and that $g'(t) = f(t)$ where $f$ is continuous; thus $g$ is continuous and piecewise smooth for $t \ge 0$. Furthermore,

$$|g(t)| \le \int_0^t |f(\tau)|\,d\tau \le M\int_0^t e^{c\tau}\,d\tau = \frac{M}{c}\big(e^{ct} - 1\big) < \frac{M}{c}\,e^{ct},$$

so $g(t)$ is of exponential order as $t \to +\infty$. Hence we can apply Theorem 1 to $g$; this gives

$$\mathcal{L}\{f(t)\} = \mathcal{L}\{g'(t)\} = s\,\mathcal{L}\{g(t)\} - g(0).$$

Now $g(0) = 0$, so division by $s$ yields

$$\mathcal{L}\left\{\int_0^t f(\tau)\,d\tau\right\} = \mathcal{L}\{g(t)\} = \frac{\mathcal{L}\{f(t)\}}{s},$$

which completes the proof.
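As a spot check of the integral formula just proved (a sketch; the sample choice $f(t) = \sin kt$ is ours):

g[t_] = Integrate[Sin[k tau], {tau, 0, t}];    (* g(t) = (1 - Cos[k t])/k *)
Simplify[LaplaceTransform[g[t], t, s] == LaplaceTransform[Sin[k t], t, s]/s]   (* True *)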

Example 6

Find the inverse Laplace transform of

$$G(s) = \frac{1}{s^2 (s - a)}.$$

Solution

In effect, Eq. (18) means that we can delete a factor of s from the denominator, find the inverse transform of the resulting simpler expression, and finally integrate from 0 to t (to “correct” for the missing factor s). Thus

$$\mathcal{L}^{-1}\left\{\frac{1}{s(s - a)}\right\} = \int_0^t \mathcal{L}^{-1}\left\{\frac{1}{s - a}\right\} d\tau = \int_0^t e^{a\tau}\,d\tau = \frac{1}{a}\big(e^{at} - 1\big).$$

We now repeat the technique to obtain

$$\mathcal{L}^{-1}\left\{\frac{1}{s^2(s - a)}\right\} = \int_0^t \mathcal{L}^{-1}\left\{\frac{1}{s(s - a)}\right\} d\tau = \int_0^t \frac{1}{a}\big(e^{a\tau} - 1\big)\,d\tau = \left[\frac{1}{a}\left(\frac{1}{a}\,e^{a\tau} - \tau\right)\right]_0^t = \frac{1}{a^2}\big(e^{at} - at - 1\big).$$

This technique is often a more convenient way than the method of partial fractions for finding an inverse transform of a fraction of the form $P(s)/[s^n Q(s)]$.
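Example 6 can be cross-checked directly (a sketch, for $a \neq 0$; the name invG is ours):

invG = InverseLaplaceTransform[1/(s^2 (s - a)), s, t];
Simplify[invG - (Exp[a t] - a t - 1)/a^2]   (* 0 *)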

Proof of Theorem 1:

We conclude this section with the proof of Theorem 1 in the general case in which $f'$ is merely piecewise continuous. We need to prove that the limit

$$\lim_{b \to \infty} \int_0^b e^{-st} f'(t)\,dt$$

exists and also need to find its value. With $b$ fixed, let $t_1, t_2, \ldots, t_{k-1}$ be the points interior to the interval $[0, b]$ at which $f'$ is discontinuous. Let $t_0 = 0$ and $t_k = b$. Then we can integrate by parts on each interval $(t_{n-1}, t_n)$ where $f'$ is continuous. This yields

$$\int_0^b e^{-st} f'(t)\,dt = \sum_{n=1}^{k} \int_{t_{n-1}}^{t_n} e^{-st} f'(t)\,dt = \sum_{n=1}^{k} \Big[e^{-st} f(t)\Big]_{t_{n-1}}^{t_n} + \sum_{n=1}^{k} s \int_{t_{n-1}}^{t_n} e^{-st} f(t)\,dt.$$
(19)

Now the first summation

$$\sum_{n=1}^{k} \Big[e^{-st} f(t)\Big]_{t_{n-1}}^{t_n} = \big[-f(t_0) + e^{-st_1} f(t_1)\big] + \big[-e^{-st_1} f(t_1) + e^{-st_2} f(t_2)\big] + \cdots + \big[-e^{-st_{k-2}} f(t_{k-2}) + e^{-st_{k-1}} f(t_{k-1})\big] + \big[-e^{-st_{k-1}} f(t_{k-1}) + e^{-st_k} f(t_k)\big]$$
(20)

in (19) telescopes down to $-f(t_0) + e^{-st_k} f(t_k) = -f(0) + e^{-sb} f(b)$, and the second summation adds up to $s$ times the integral from $t_0 = 0$ to $t_k = b$. Therefore (19) reduces to

$$\int_0^b e^{-st} f'(t)\,dt = -f(0) + e^{-sb} f(b) + s\int_0^b e^{-st} f(t)\,dt.$$

But from Eq. (3) we get

$$\big|e^{-sb} f(b)\big| \le e^{-sb} M e^{cb} = M e^{-b(s - c)} \to 0$$

if $s > c$. Therefore, finally taking limits (with $s$ fixed) as $b \to +\infty$ in the preceding equation, we get the desired result

$$\mathcal{L}\{f'(t)\} = s\,\mathcal{L}\{f(t)\} - f(0).$$

Extension of Theorem 1

Now suppose that the function $f$ is only piecewise continuous (instead of continuous), and let $t_1, t_2, t_3, \ldots$ be the points (for $t > 0$) where either $f$ or $f'$ is discontinuous. The fact that $f$ is piecewise continuous includes the assumption that—within each interval $[t_{n-1}, t_n]$ between successive points of discontinuity—$f$ agrees with a function that is continuous on the whole closed interval and has “endpoint values”

$$f(t_{n-1}^{+}) = \lim_{t \to t_{n-1}^{+}} f(t) \quad\text{and}\quad f(t_n^{-}) = \lim_{t \to t_n^{-}} f(t)$$

that may not agree with the actual values $f(t_{n-1})$ and $f(t_n)$. The value of an integral on an interval is not affected by changing the values of the integrand at the endpoints. However, if the fundamental theorem of calculus is applied to find the value of the integral, then the antiderivative function must be continuous on the closed interval. We therefore use the “continuous from within the interval” endpoint values above in evaluating (by parts) the integrals on the right in (19). The result is

$$\sum_{n=1}^{k} \Big[e^{-st} f(t)\Big]_{t_{n-1}}^{t_n} = \big[-f(t_0^{+}) + e^{-st_1} f(t_1^{-})\big] + \big[-e^{-st_1} f(t_1^{+}) + e^{-st_2} f(t_2^{-})\big] + \cdots + \big[-e^{-st_{k-2}} f(t_{k-2}^{+}) + e^{-st_{k-1}} f(t_{k-1}^{-})\big] + \big[-e^{-st_{k-1}} f(t_{k-1}^{+}) + e^{-st_k} f(t_k^{-})\big] = -f(0^{+}) - \sum_{n=1}^{k-1} j_f(t_n) + e^{-sb} f(b^{-}),$$
(20′)

where

$$j_f(t_n) = f(t_n^{+}) - f(t_n^{-})$$
(21)

denotes the (finite) jump in $f(t)$ at $t = t_n$. Assuming that $\mathcal{L}\{f'(t)\}$ exists, we therefore get the generalization

$$\mathcal{L}\{f'(t)\} = s F(s) - f(0^{+}) - \sum_{n=1}^{\infty} e^{-st_n} j_f(t_n)$$
(22)

of $\mathcal{L}\{f'(t)\} = s F(s) - f(0)$ when we now take the limit in (19) as $b \to +\infty$.

Example 7

Let $f(t) = 1 + \lfloor t \rfloor$ be the unit staircase function; its graph is shown in Fig. 10.2.8. Then $f(0) = 1$, $f'(t) \equiv 0$, and $j_f(n) = 1$ for each integer $n = 1, 2, 3, \ldots$. Hence Eq. (22) yields

FIGURE 10.2.8.

The graph of the unit staircase function of Example 7.

$$0 = s F(s) - 1 - \sum_{n=1}^{\infty} e^{-ns},$$

so the Laplace transform of f(t) is

$$F(s) = \frac{1}{s}\sum_{n=0}^{\infty} e^{-ns} = \frac{1}{s(1 - e^{-s})}.$$

In the last step we used the formula for the sum of a geometric series,

$$\sum_{n=0}^{\infty} x^n = \frac{1}{1 - x},$$

with $x = e^{-s} < 1$.
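Both the geometric-series step and the resulting transform can be spot-checked in Mathematica (a sketch; the sample value $s = 2$ and the truncation at $n = 40$ are arbitrary choices of ours):

Sum[Exp[-n s], {n, 0, Infinity}]     (* geometric series: 1/(1 - E^-s) for Re[s] > 0 *)
Sum[(n + 1) Integrate[Exp[-2 t], {t, n, n + 1}], {n, 0, 40}] // N
                                     (* integral defining F(2), piece by piece: about 0.5783 *)
1/(2 (1 - Exp[-2])) // N             (* closed form F(2): about 0.5783 *)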

10.2 Problems

Use Laplace transforms to solve the initial value problems in Problems 1 through 16.

  1. $x'' + 4x = 0$; $x(0) = 5$, $x'(0) = 0$

  2. $x'' + 9x = 0$; $x(0) = 3$, $x'(0) = 4$

  3. $x'' - x' - 2x = 0$; $x(0) = 0$, $x'(0) = 2$

  4. $x'' + 8x' + 15x = 0$; $x(0) = 2$, $x'(0) = -3$

  5. $x'' + x = \sin 2t$; $x(0) = 0 = x'(0)$

  6. $x'' + 4x = \cos t$; $x(0) = 0 = x'(0)$

  7. $x'' + x = \cos 3t$; $x(0) = 1$, $x'(0) = 0$

  8. $x'' + 9x = 1$; $x(0) = 0 = x'(0)$

  9. $x'' + 4x' + 3x = 1$; $x(0) = 0 = x'(0)$

  10. $x'' + 3x' + 2x = t$; $x(0) = 0$, $x'(0) = 2$

  11. $x' = 2x + y$, $y' = 6x + 3y$; $x(0) = 1$, $y(0) = -2$

  12. $x' = x + 2y$, $y' = x + e^{-t}$; $x(0) = y(0) = 0$

  13. $x' + 2y' + x = 0$, $x' - y' + y = 0$; $x(0) = 0$, $y(0) = 1$

  14. $x'' + 2x + 4y = 0$, $y'' + x + 2y = 0$; $x(0) = y(0) = 0$, $x'(0) = y'(0) = -1$

  15. $x'' + x' + y' + 2x - y = 0$, $y'' + x' + y' + 4x - 2y = 0$; $x(0) = y(0) = 1$, $x'(0) = y'(0) = 0$

  16. $x' = x + z$, $y' = x + y$, $z' = -2x - z$; $x(0) = 1$, $y(0) = 0$, $z(0) = 0$

Apply Theorem 2 to find the inverse Laplace transforms of the functions in Problems 17 through 24.

  17. $F(s) = \dfrac{1}{s(s - 3)}$

  18. $F(s) = \dfrac{3}{s(s + 5)}$

  19. $F(s) = \dfrac{1}{s(s^2 + 4)}$

  20. $F(s) = \dfrac{2s + 1}{s(s^2 + 9)}$

  21. $F(s) = \dfrac{1}{s^2(s^2 + 1)}$

  22. $F(s) = \dfrac{1}{s(s^2 - 9)}$

  23. $F(s) = \dfrac{1}{s^2(s^2 - 1)}$

  24. $F(s) = \dfrac{1}{s(s + 1)(s + 2)}$

     

  25. Apply Theorem 1 to derive $\mathcal{L}\{\sin kt\}$ from the formula for $\mathcal{L}\{\cos kt\}$.

  26. Apply Theorem 1 to derive $\mathcal{L}\{\cosh kt\}$ from the formula for $\mathcal{L}\{\sinh kt\}$.

  27. (a) Apply Theorem 1 to show that

      $$\mathcal{L}\{t^n e^{at}\} = \frac{n}{s - a}\,\mathcal{L}\{t^{n-1} e^{at}\}.$$

      (b) Deduce that $\mathcal{L}\{t^n e^{at}\} = n!/(s - a)^{n+1}$ for $n = 1, 2, 3, \ldots$.

Apply Theorem 1 as in Example 5 to derive the Laplace transforms in Problems 28 through 30.

  28. $\mathcal{L}\{t\cos kt\} = \dfrac{s^2 - k^2}{(s^2 + k^2)^2}$

  29. $\mathcal{L}\{t\sinh kt\} = \dfrac{2ks}{(s^2 - k^2)^2}$

  30. $\mathcal{L}\{t\cosh kt\} = \dfrac{s^2 + k^2}{(s^2 - k^2)^2}$

  31. Apply the results in Example 5 and Problem 28 to show that

      $$\mathcal{L}^{-1}\left\{\frac{1}{(s^2 + k^2)^2}\right\} = \frac{1}{2k^3}\,(\sin kt - kt\cos kt).$$

Apply the extension of Theorem 1 in Eq. (22) to derive the Laplace transforms given in Problems 32 through 37.

  32. $\mathcal{L}\{u(t - a)\} = s^{-1} e^{-as}$ for $a > 0$.

  33. If $f(t) = 1$ on the interval $[a, b]$ (where $0 < a < b$) and $f(t) = 0$ otherwise, then

    $$\mathcal{L}\{f(t)\} = \frac{e^{-as} - e^{-bs}}{s}.$$
  34. If $f(t) = (-1)^{\lfloor t \rfloor}$ is the square-wave function whose graph is shown in Fig. 10.2.9, then

    $$\mathcal{L}\{f(t)\} = \frac{1}{s}\tanh\frac{s}{2}.$$

    (Suggestion: Use the geometric series.)

    FIGURE 10.2.9.

    The graph of the square-wave function of Problem 34.

  35. If $f(t)$ is the unit on–off function whose graph is shown in Fig. 10.2.10, then

    $$\mathcal{L}\{f(t)\} = \frac{1}{s(1 + e^{-s})}.$$

    FIGURE 10.2.10.

    The graph of the on–off function of Problem 35.

  36. If $g(t)$ is the triangular wave function whose graph is shown in Fig. 10.2.11, then

    $$\mathcal{L}\{g(t)\} = \frac{1}{s^2}\tanh\frac{s}{2}.$$

    FIGURE 10.2.11.

    The graph of the triangular wave function of Problem 36.

  37. If $f(t)$ is the sawtooth function whose graph is shown in Fig. 10.2.12, then

    $$\mathcal{L}\{f(t)\} = \frac{1}{s^2} - \frac{e^{-s}}{s(1 - e^{-s})}.$$

    (Suggestion: Note that $f'(t) \equiv 1$ where it is defined.)

    FIGURE 10.2.12.

    The graph of the sawtooth function of Problem 37.

10.2 Application Transforms of Initial Value Problems

The typical computer algebra system knows Theorem 1 and its corollary, hence can transform not only functions (as in the Section 10.1 application), but also entire initial value problems. We illustrate the technique here with Mathematica and in the Section 10.3 application with Maple. Consider the initial value problem

$$x'' + 4x = \sin 3t, \qquad x(0) = x'(0) = 0$$

of Example 2. First we define the differential equation with its initial conditions, then load the Laplace transform package.

de = x''[t] + 4*x[t] == Sin[3*t]
inits = {x[0] -> 0, x'[0] -> 0}

The Laplace transform of the differential equation is given by

DE = LaplaceTransform[ de, t, s ]

The result of this command—which we do not show explicitly here—is a linear (algebraic) equation in the as yet unknown LaplaceTransform[x[t],t,s]. We proceed to solve for this transform X(s) of the unknown function x(t) and substitute the initial conditions.

X = Solve[DE, LaplaceTransform[x[t],t,s]]
X = X // Last // Last // Last
X = X /. inits
$$\frac{3}{(s^2 + 4)(s^2 + 9)}$$

Finally we need only compute an inverse transform to find x(t).

x = InverseLaplaceTransform[X,s,t]
$$\tfrac{1}{5}\big(3\cos(t)\sin(t) - \sin(3t)\big)$$

x /. {Cos[t] Sin[t] -> 1/2 Sin[2 t]} // Expand

$$\tfrac{3}{10}\sin(2t) - \tfrac{1}{5}\sin(3t)$$

Of course we could probably get this result immediately with DSolve, but the intermediate output generated by the steps shown here can be quite instructive. You can try it for yourself with the initial value problems in Problems 1 through 16.
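For comparison, the one-step DSolve computation mentioned above might look like this (a sketch assuming a standard Mathematica installation; Clear is needed only because x was assigned a value above):

Clear[x]
DSolve[{x''[t] + 4 x[t] == Sin[3 t], x[0] == 0, x'[0] == 0}, x[t], t]
(* {{x[t] -> (3/10) Sin[2 t] - (1/5) Sin[3 t]}}, up to rearrangement *)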
