CHAPTER 1

FIRST-ORDER SCALAR EQUATIONS

In this chapter we study the basic properties of first-order scalar ordinary differential equations and their solutions. The first and larger part is devoted to linear equations and several of their basic properties, such as the principle of superposition, Duhamel’s principle, and the concept of stability. In the second part we study nonlinear scalar equations briefly, emphasizing the new behaviors that emerge, and introduce the very useful technique known as the principle of linearization. The scalar equations and their properties are crucial to an understanding of the behavior of more general differential equations.

1.1 Constant coefficient linear equations

Consider a complex function y of a real variable t. One of the simplest differential equations that y can obey is given by

(1.1)   dy/dt = λy,

where λ is constant. We want to solve the initial value problem, that is, we want to determine a solution for t ≥ 0 with given initial value

(1.2)   y(0) = y0.

Clearly,

(1.3)   y(t) = e^{λt} y0

is the solution of (1.1), (1.2). Let us discuss the solution under different assumptions on the constant λ. In Figures 1.1 and 1.2 we illustrate the solution for y0 = 1 + 0.4i and different values of λ.

Figure 1.1 Exponentially decaying solutions. Re y shown as solid lines and Im y as dashed lines.

Figure 1.2 Exponentially growing solutions. Re y is shown as solid line and Im y as a dashed line.

1. λ ∈ ℝ, λ < 0. In this case both the real and imaginary parts of the solution decay exponentially. If |λ| ≫ 1, the decay is very rapid.
2. λ ∈ ℝ, λ > 0. The solution grows exponentially. The growth is slow if |λ| ≪ 1. For example, for λ = 0.01 we have, by Taylor expansion,

y(t) = e^{0.01t} y0 = (1 + 0.01t + O((0.01t)²)) y0 ≈ (1 + 0.01t) y0   for 0.01t ≪ 1.

On the other hand, if λ ≫ 1, the solution grows very rapidly.
3. λ = iξ, ξ ∈ ℝ. In this case, the amplitude |y(t)| of the solution is constant in time,

|y(t)| = |e^{iξt}| |y0| = |y0|.

If the complex initial data y0 is written as

y0 = a + ib,   a, b ∈ ℝ,

the solution is

y(t) = (a cos ξt − b sin ξt) + i (a sin ξt + b cos ξt),

which defines the real part yR(t) = a cos ξt − b sin ξt and imaginary part yI(t) = a sin ξt + b cos ξt of the solution. Both parts are oscillatory functions of t. The solution is highly oscillatory if |ξ| ≫ 1. Figure 1.3 shows the solution for λ = 2i and y0 = 1 + 0.4i. Another representation of the solution is obtained if we write the initial data in amplitude–phase form,

Figure 1.3 Oscillatory solution. Re y is shown as a solid line and Im y as a dashed line.

y0 = |y0| e^{iα},   α = arg y0 ∈ (−π, π].

One calls the modulus |y0| the amplitude of y0 and the principal argument α the phase of y0. The solution becomes

(1.4)   y(t) = |y0| e^{i(ξt+α)}.

The real part of the solution with λ = i, |y0| = 1, and α = −π/4, that is,

Re y(t) = cos(t − π/4),

is shown in Figure 1.4.

Figure 1.4 Real part of the solution (1.4).

4. The general case. Let

λ = η + iξ,   η, ξ ∈ ℝ,   y0 = |y0| e^{iα}.

The solution is given by

y(t) = e^{(η+iξ)t} y0 = e^{ηt} e^{iξt} y0;

thus,

|y(t)| = e^{ηt} |y0|,   arg y(t) = ξt + α.

Therefore, depending on the sign of η, the amplitude |y(t)| of the solution grows, decays, or remains constant. The phase ξt + α is a linear function of t and changes rapidly if |ξ| is large.
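To make the case analysis concrete, here is a minimal numerical sketch (assuming Python with NumPy; the parameter values are illustrative, not from the text) that evaluates the solution (1.3) for representative values of λ and verifies the modulus relation |y(t)| = e^{ηt} |y0|.

```python
import numpy as np

y0 = 1.0 + 0.4j                          # initial data, as in Figures 1.1-1.3
t = np.linspace(0.0, 10.0, 201)          # time grid on [0, 10]

# representative cases: decaying, slowly growing, purely oscillatory, general
for lam in (-2.0 + 0.0j, 0.01 + 0.0j, 2.0j, 0.1 + 2.0j):
    y = np.exp(lam * t) * y0             # exact solution (1.3)
    eta = lam.real                       # eta = Re(lambda)
    # the amplitude must follow e^{eta t} |y0| up to rounding error
    assert np.allclose(np.abs(y), np.exp(eta * t) * abs(y0))
    print(f"lambda = {lam}: |y(10)| = {abs(y[-1]):.3e}")
```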

Next, consider the inhomogeneous problem

(1.5)   dy/dt = λy + a e^{μt},   y(0) = y0,

where λ, a, μ, and y0 are complex constants. Disregarding the initial condition at first, we look for a particular solution of the form

(1.6)   yP(t) = A e^{μt}.

Introducing (1.6) into the differential equation (1.5) gives us

A μ e^{μt} = λ A e^{μt} + a e^{μt},

that is,

(μ − λ) A = a.

If μ ≠ λ we obtain the particular solution

yP(t) = (a/(μ − λ)) e^{μt}.

On the other hand, if μ = λ, the procedure above is not successful and we try to find a solution of the form1

(1.7)   yP(t) = A t e^{λt}.

Introducing (1.7) into the differential equation gives us

A e^{λt} + λ A t e^{λt} = λ A t e^{λt} + a e^{λt}.

The last equation is satisfied if we choose A = a; recall that λ = μ by assumption. Let us summarize our results.

Lemma 1.1 The function

yP(t) = (a/(μ − λ)) e^{μt}   if μ ≠ λ,      yP(t) = a t e^{λt}   if μ = λ,

is a solution of the differential equation dy/dt = λy + a e^{μt}.

Note that the particular solution yP(t) does not adjust, in general, to the initial data given (i.e., yP(0) ≠ y0). The initial value problem (1.5) can now be solved in the following way. We introduce the dependent variable u by

u(t) = y(t) − yP(t).

Initial value problem (1.5) yields

du/dt = dy/dt − dyP/dt = λy + a e^{μt} − dyP/dt,   u(0) = y0 − yP(0),

and, since dyP/dt = λyP + a e^{μt}, we obtain

du/dt = λ(y − yP) = λu.

Thus, u(t) satisfies the corresponding homogeneous differential equation, and (1.3) yields

u(t) = e^{λt} (y0 − yP(0)).

The complete solution

y(t) = yP(t) + u(t) = yP(t) + e^{λt} (y0 − yP(0))

consists of two parts, yP(t) and u(t). The function yP(t) is also called the forced solution because it has essentially the same behavior as the forcing a e^{μt}. The other part, u(t), is often called the transient solution since it converges to zero for t → ∞ if Re λ < 0.
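As a quick check of Lemma 1.1 and this decomposition, the following sketch (NumPy assumed; the constants are arbitrary illustrative choices) builds y(t) = yP(t) + e^{λt}(y0 − yP(0)) in the nonresonance case and verifies the differential equation with a centered difference quotient.

```python
import numpy as np

lam, mu = -1.0 + 2.0j, 0.3j              # nonresonance: mu != lam
a, y0 = 0.5, 1.0 + 0.4j

def y(t):
    yP = a / (mu - lam) * np.exp(mu * t)           # forced solution (Lemma 1.1)
    u = np.exp(lam * t) * (y0 - a / (mu - lam))    # transient solution
    return yP + u

# residual of dy/dt = lambda*y + a*e^{mu t} at a sample time, via central differences
t, h = 1.7, 1e-6
dydt = (y(t + h) - y(t - h)) / (2.0 * h)
print(abs(dydt - (lam * y(t) + a * np.exp(mu * t))))   # ~1e-9: finite-difference error only
print(abs(y(0.0) - y0))                                # 0: the initial condition holds
```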

Finally, we want to show how we can solve the initial value problem

(1.8)   dy/dt = λy + F(t),   y(0) = y0,   t ≥ 0,

with a general forcing F(t). We can solve this problem by applying a procedure known as Duhamel’s principle.

1.1.1 Duhamel’s principle

Lemma 1.2 The solution of (1.8) is given by

(1.9)   y(t) = e^{λt} y0 + ∫_0^t e^{λ(t−s)} F(s) ds.

Proof: Define y(t) by formula (1.9). Clearly, y(0) = y0 (i.e., the initial condition is satisfied). Also, y(t) is a solution of the differential equation, because

dy/dt = λ e^{λt} y0 + e^{λ(t−t)} F(t) + ∫_0^t λ e^{λ(t−s)} F(s) ds = λ y(t) + F(t).

This proves the lemma.

Exercise 1.1 Prove that the solution (1.9) is the unique solution of (1.8).

We shall now discuss the relation between the solution to inhomogeneous equation (1.8) and the homogeneous equation

(1.10)   du/dt = λu.

We consider (1.10) with initial condition u = u(s) given at a time s ≥ 0. At a later time t ≥ s the solution is

u(t) = e^{λ(t−s)} u(s).

Thus, e^{λ(t−s)} is a factor that connects u(t) with u(s). We will call it the solution operator and use the notation

(1.11)   S(t, s) = e^{λ(t−s)}.

The solution operator has the following properties:

(1.12)   S(s, s) = 1,      S(t, t1) S(t1, s) = S(t, s),      dS(t, s)/dt = λ S(t, s).

Now we can show that the solution of inhomogeneous equation (1.8) can be expressed in terms of solutions of homogeneous equation (1.10). In terms of the solution operator, (1.9) becomes

(1.13)   y(t) = S(t, 0) y0 + ∫_0^t S(t, s) F(s) ds.

In a somewhat loose way, we may consider the integral as a “sum” of many terms S(t, sj) F(sj) Δsj; think of approximating the integral by a Riemann sum. Then (1.13) expresses the solution of inhomogeneous problem (1.8) as a weighted superposition of solutions t ↦ S(t, s) of homogeneous equation (1.10). The idea of expressing the solution of an inhomogeneous problem via solutions of the homogeneous equation is very useful. As we will see, it generalizes to systems of equations, to partial differential equations, and also to difference approximations. It is known as Duhamel’s principle.
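The Riemann-sum reading of (1.13) can be tested directly. The sketch below (NumPy assumed; the forcing and constants are illustrative) approximates the Duhamel integral by a midpoint sum of terms S(t, sj) F(sj) Δs and compares the result with the exact solution obtained from Lemma 1.1.

```python
import numpy as np

lam, a, mu = -0.8, 1.0, 0.5j
y0, t = 1.0 + 0.4j, 4.0

S = lambda t, s: np.exp(lam * (t - s))        # solution operator (1.11)
F = lambda s: a * np.exp(mu * s)              # forcing F(s) = a e^{mu s}

n = 20000                                     # number of Riemann-sum terms
ds = t / n
s = (np.arange(n) + 0.5) * ds                 # midpoints s_j
y_duhamel = S(t, 0.0) * y0 + np.sum(S(t, s) * F(s)) * ds    # formula (1.13)

yP = lambda s: a / (mu - lam) * np.exp(mu * s)              # Lemma 1.1 (mu != lam)
y_exact = yP(t) + np.exp(lam * t) * (y0 - yP(0.0))
print(abs(y_duhamel - y_exact))               # small, and shrinks as n grows
```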

Exercise 1.2 Use Duhamel’s principle to derive a representation for the solution of

equation

Exercise 1.3 Consider the inhomogeneous initial value problem

dy/dt = λy + Pn(t) e^{μt},   y(0) = y0,

where Pn(t) is a polynomial of degree n with complex coefficients. Show that the solution to the problem is of the form

y(t) = Qm(t) e^{μt} + σ e^{λt},

where Qm(t) is a polynomial of degree m with m = n in the nonresonance case (μ ≠ λ) and m = n + 1 in the resonance case (μ = λ). Determine σ in each case.

We now want to consider scalar equations with smooth variable coefficients, which leads to the next principle.

1.1.2 Principle of frozen coefficients

In many applications a problem with smooth variable coefficients can be localized, that is, decomposed into many constant coefficient problems (by using a partition of unity). By solving all of these constant coefficient problems, one can then construct an approximate solution to the original variable coefficient problem. The approximation can be made as accurate as one wants. The general theory concludes that if all relevant constant coefficient problems have a solution, the variable coefficient problem also has a solution. This procedure is known as the principle of frozen coefficients. We do not go into it more deeply here.

1.2 Variable coefficient linear equations

1.2.1 Principle of superposition

The initial value problem

(1.14)   dy/dt = λ(t) y + F(t),   y(0) = y0,   t ≥ 0,

is an example of a linear problem. It has the following properties:

1. Let y(t) be a solution of (1.14). Let σ be a constant and replace F(t) and y0 by σF(t) and σy0, respectively. In other words, consider the new problem

(1.15)   dỹ/dt = λ(t) ỹ + σF(t),   ỹ(0) = σy0.

Multiplying (1.14) by σ gives us

d(σy)/dt = λ(t)(σy) + σF(t),   (σy)(0) = σy0.

Thus, σy(t) solves the new problem and, using uniqueness, the solution of (1.15) is

ỹ(t) = σ y(t).

2. Consider (1.14) with two forcing functions F1(t), F2(t) and two initial data y01, y02. Denote the resulting solutions by y1(t), y2(t), respectively:

dy1/dt = λ(t) y1 + F1(t),   y1(0) = y01,      dy2/dt = λ(t) y2 + F2(t),   y2(0) = y02.

Adding the equations, we find that

d(y1 + y2)/dt = λ(t)(y1 + y2) + F1(t) + F2(t),   (y1 + y2)(0) = y01 + y02.

Thus, the sum

y(t) = y1(t) + y2(t)

is a solution of

dy/dt = λ(t) y + F1(t) + F2(t),   y(0) = y01 + y02.

To summarize, for a linear problem such as (1.14), we can use superposition of solutions to obtain new solutions. This property of linear systems, known as the superposition principle, can be used to compose solutions with complicated forcing functions out of solutions of simpler problems. Consider, for example,

(1.16)   dy/dt = λy + a1 e^{μ1 t} + a2 e^{μ2 t} + a3 e^{μ3 t},   y(0) = y0,   μj ≠ λ for j = 1, 2, 3.

Since

F(t) = a1 e^{μ1 t} + a2 e^{μ2 t} + a3 e^{μ3 t}

consists of three terms, we solve three problems:

dyj/dt = λ yj + aj e^{μj t},   j = 1, 2, 3.

We do not yet impose initial conditions, so that we can choose simple particular solutions. By Lemma 1.1, particular solutions of the equations above are

yPj(t) = (aj/(μj − λ)) e^{μj t},   j = 1, 2, 3.

Using the superposition principle, we find that

yP(t) = yP1(t) + yP2(t) + yP3(t)

solves

dyP/dt = λ yP + F(t).

Clearly,

u(t) = σ e^{λt}

solves the homogeneous equation du/dt = λu for any constant σ.

Therefore, the solution of (1.16) is given by

y(t) = yP(t) + σ e^{λt},

where σ is determined by the initial condition

y(0) = yP(0) + σ = y0,   that is,   σ = y0 − yP(0).

The superposition principle relies only on linearity; it holds for any linear equation or system of linear equations, both ordinary and partial differential equations. An equation is linear if the dependent variable and its derivatives appear only linearly (i.e., as the first power) in the equation.
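The superposition property is easy to verify numerically. The following sketch (assuming NumPy and SciPy are available; the coefficient and forcings are illustrative choices, not from the text) solves problems of the form (1.14) with two forcings separately and confirms that the sum of the solutions solves the problem with the summed forcing and summed initial data.

```python
import numpy as np
from scipy.integrate import solve_ivp

lam = lambda t: -1.0 + 0.5 * np.sin(t)         # smooth variable coefficient
F1 = lambda t: np.cos(2.0 * t)                 # first forcing
F2 = lambda t: np.exp(-t)                      # second forcing

def solve(F, y0, T=5.0):
    """Integrate dy/dt = lam(t) y + F(t), y(0) = y0, up to time T."""
    sol = solve_ivp(lambda t, y: lam(t) * y + F(t), (0.0, T), [y0],
                    rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]                        # return y(T)

y1 = solve(F1, 0.3)                            # forcing F1, data y01
y2 = solve(F2, 0.7)                            # forcing F2, data y02
y12 = solve(lambda t: F1(t) + F2(t), 0.3 + 0.7)  # forcing F1 + F2, data y01 + y02
print(abs((y1 + y2) - y12))                    # zero up to the integration tolerance
```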

Exercise 1.4 Solve the initial value problem

equation

1.2.2 Duhamel’s principle for variable coefficients

We want to discuss now the solution of problem (1.14) in terms of Duhamel’s principle. To this end we discuss the solution operator in a more abstract setting.

Consider first an initial value problem for the homogeneous equation associated with (1.14):

(1.17)   du/dt = λ(t) u,   t ≥ s,   u(s) = u0.

The solution operator for problem (1.17) is given by

(1.18)   S(t, s) = exp(∫_s^t λ(τ) dτ).

Clearly,

u(t) = S(t, s) u0

is the solution of (1.17). With this solution operator, Duhamel’s principle [see equation (1.13)] generalizes to our variable coefficient problem (1.14):

(1.19)   y(t) = S(t, 0) y0 + ∫_0^t S(t, s) F(s) ds.

This can be proved in terms of general properties of the solution operator.

It is not difficult to show that (1.18) has the following properties:

1. S(t, t) = I. Here I represents the identity operator [i.e., Iv(t) = v(t)].
2. Let t ≥ t1 ≥ s ≥ 0. Then

S(t, s) = S(t, t1) S(t1, s).

3. S(t, r) is a smooth function of t and

∂S(t, r)/∂t = λ(t) S(t, r).

We shall use these properties to prove that (1.19) solves (1.14). Since S(0, 0) = I, we have y(0) = y0. Also,

dy/dt = λ(t) S(t, 0) y0 + S(t, t) F(t) + ∫_0^t λ(t) S(t, s) F(s) ds = λ(t) y(t) + F(t).

Therefore, y(t) given by (1.19) is the solution of (1.14).
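These properties can also be checked numerically for a concrete coefficient. The sketch below (assuming NumPy and SciPy; λ(t) is an illustrative choice) evaluates the solution operator (1.18) by quadrature and tests properties 1–3.

```python
import numpy as np
from scipy.integrate import quad

lam = lambda t: -0.5 + np.cos(t)               # a smooth variable coefficient

def S(t, s):
    """Solution operator (1.18): exponential of the integral of lam over [s, t]."""
    val, _ = quad(lam, s, t)
    return np.exp(val)

t, t1, s = 3.0, 2.0, 1.0
print(abs(S(t, t) - 1.0))                      # property 1: S(t, t) = I
print(abs(S(t, s) - S(t, t1) * S(t1, s)))      # property 2: composition
h = 1e-6                                       # property 3: dS/dt = lam(t) S(t, s)
print(abs((S(t + h, s) - S(t - h, s)) / (2 * h) - lam(t) * S(t, s)))
```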

Exercise 1.5 Find the solution operator and, using Duhamel’s principle, the solution of the following initial value problems.

(a)

equation

(b)

equation

(c)

equation

1.3 Perturbations and the concept of stability

Given a problem and perturbations to it, we want to know what effect the perturbations have on the solution.

As an example, consider the initial value problem

(1.20)   dy/dt = λy + e^{−t},   y(0) = −1/(1 + λ),   t ≥ 0,

with λ ≠ −1. (The exceptional case of resonance, λ = −1, can be treated with slight modifications.)

The solution of (1.20) is the decaying function

y(t) = −e^{−t}/(1 + λ).

Now consider the same differential equation with perturbed initial data

(1.21)   dỹ/dt = λỹ + e^{−t},   ỹ(0) = −1/(1 + λ) + ε,

where 0 < ε ≪ 1 is a small constant. Let w(t) = ỹ(t) − y(t) denote the difference between the perturbed and original solutions. Subtracting (1.20) from (1.21), we obtain

dw/dt = λw,   w(0) = ε,

whose solution is

(1.22)   w(t) = ε e^{λt}.

Depending on the sign of Re λ, there are three possibilities.

1. Re λ < 0. In this case the perturbation term w(t) decays exponentially with time and the solution of the perturbed problem converges to the solution of the original problem as t increases. In Figure 1.5 the solid line represents the solution with λ = −7/12, and the dashed line is the solution with perturbed initial data.

Figure 1.5 Decaying perturbation.

2. Re λ = 0. The perturbation w(t) does not decrease with time, but it does not grow either (see Figure 1.6).

Figure 1.6 Non-decaying perturbation.

3. Re λ > 0. The perturbation grows exponentially in time. Figure 1.7 shows the perturbed and unperturbed solutions for λ = 1.

Figure 1.7 Exponentially growing perturbation.

In the latter case it will be very difficult to compute the original solution accurately over long time intervals. For example, if λ = 1 and ε = 10^{−10}, then

w(t) = 10^{−10} e^{t}.

Therefore, if the calculation introduces an error ε = 10^{−10} at t = 0, this error will grow to w(T) = 1 at about T = ln 10^{10} ≈ 23. This growth holds even if no further errors, except the original error ε = 10^{−10} at t = 0, are introduced.
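The time scale on which such an error becomes visible follows directly from (1.22); a short computation (a Python sketch, with the values from the example above) confirms it.

```python
import numpy as np

eps, lam = 1e-10, 1.0          # initial error and growth rate, as in the example
T = np.log(1.0 / eps) / lam    # solve eps * e^{lam T} = 1 for T
print(T)                       # ~23.03: the time at which the error reaches O(1)
print(eps * np.exp(lam * T))   # 1.0, as expected
```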

In applications the initial data and the forcing are never given exactly. Therefore, if Re λ > 0, one cannot guarantee that the answer computed is close to the correct answer. For Re λ < 0, the situation is the opposite: Initial errors in the data are wiped out. Problems corresponding to Re λ < 0 are called strongly stable. If Re λ = 0, the problem is stable but not strongly stable, and if Re λ > 0, the problem is unstable.

Next, let us perturb the forcing and consider

(1.23)   dỹ/dt = λỹ + e^{−t} + εG(t),   ỹ(0) = −1/(1 + λ),

instead of (1.20). The error term w(t) = ỹ(t) − y(t) solves

(1.24)   dw/dt = λw + εG(t),   w(0) = 0,

and we obtain by Duhamel’s principle,

w(t) = ε ∫_0^t e^{λ(t−s)} G(s) ds.

Therefore, w(t) satisfies the estimate

(1.25)   |w(t)| ≤ ε max_{0≤s≤t} |G(s)| ∫_0^t e^{Re λ·(t−s)} ds.

We arrive at essentially the same conclusions as those for the perturbed initial data:

1. If the problem is strongly stable (i.e., Re λ < 0), the perturbation of the solution is bounded by

|w(t)| ≤ (ε/|Re λ|) max_{0≤s≤t} |G(s)|;

that is, the difference of the solutions is of the same order as the perturbation of the forcing.
2. If the problem is stable but not strongly stable (i.e., Re λ = 0), we obtain

|w(t)| ≤ ε ∫_0^t |G(s)| ds ≤ ε t max_{0≤s≤t} |G(s)|.

Thus, the difference in the solutions can be estimated in terms of the integrated effect of the perturbation. This effect typically grows linearly with time. In most applications one can handle such situations and obtain accurate solutions by keeping the perturbations sufficiently small. For example, if ε = 10^{−10} and |G(t)| ≤ 1, it takes a very long time before the effect of the perturbation is noticed.
3. If Re λ > 0, the effect of the perturbation grows exponentially in time. In a long time interval, this may change the true solution drastically.

There are no difficulties in generalizing this observation to linear equations with variable coefficients:

(1.26)   dy/dt = λ(t) y + F(t),   y(0) = y0,   t ≥ 0.

The influence of perturbations of the forcing and of the initial data depends on the behavior of the solution operator

S(t, s) = exp(∫_s^t λ(τ) dτ).

Definition 1.3 Consider the linear initial value problem (1.26) and its solution operator S(t, s). The problem is called strongly stable, stable, or unstable if the solution operator satisfies, respectively, the following estimates:

|S(t, s)| ≤ e^{−δ(t−s)}   (strongly stable),
|S(t, s)| ≤ 1   (stable),
|S(t, s)| unbounded as t − s → ∞   (unstable),

where δ is a positive constant.

Exercise 1.6 Consider, instead of (1.20), the initial value problem for the resonance case

dy/dt = −y + e^{−t},   y(0) = y0,

and the problem with perturbed initial data,

dỹ/dt = −ỹ + e^{−t},   ỹ(0) = y0 + ε.

Show that the same conclusions as in the nonresonance case can be drawn for w(t) = ỹ(t) − y(t).

1.4 Nonlinear equations: the possibility of blow-up

Nonlinearities in the equation can produce a solution that blows up in finite time, that is, a solution that does not exist for all times. Consider, for example, the nonlinear initial value problem given by

(1.27)   dy/dt = y²,   y(0) = y0.

For y0 = 0 the solution is y = 0 for all times. Therefore, assume that y0 ≠ 0 in the following. To calculate the solution, we write the differential equation in the form

y^{−2} dy/dt = 1

and integrate:

∫_0^t y^{−2}(s) (dy/ds) ds = t.

The change of variables y(s) = v gives us

∫_{y0}^{y(t)} v^{−2} dv = 1/y0 − 1/y(t) = t,

and we obtain

y(t) = y0/(1 − y0 t).

For y0 > 0 the solution blows up at t = 1/y0 (see Figure 1.8). This blow-up or divergence of the solution at a finite time is a consequence of the nonlinearity, that is, the term y2 on the right-hand side of the equation. This behavior cannot occur in a linear problem. On the other hand, if y0 < 0, the solution y(t) exists for all t ≥ 0 and converges to zero for t → ∞ (see Figure 1.9).

Figure 1.8 y0 > 0.

Figure 1.9 y0 < 0.
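A brief numerical experiment illustrates both behaviors. This sketch (assuming NumPy and SciPy; the stopping times are illustrative) integrates (1.27) and compares the result with the exact solution y(t) = y0/(1 − y0 t), stopping just short of the blow-up time in the growing case.

```python
import numpy as np
from scipy.integrate import solve_ivp

for y0 in (1.0, -1.0):                          # blow-up case and decaying case
    T = 0.99 / y0 if y0 > 0 else 10.0           # stop just short of t = 1/y0
    sol = solve_ivp(lambda t, y: y**2, (0.0, T), [y0],
                    rtol=1e-10, atol=1e-12)
    exact = y0 / (1.0 - y0 * sol.t[-1])         # exact solution at the final time
    print(f"y0 = {y0:+.0f}: y({sol.t[-1]:.2f}) = {sol.y[0, -1]:.4e}, exact {exact:.4e}")
```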

Consider now the more general problem

(1.28)   dy/dt = f(y, t),   y(t0) = y0.

We give here without proof a simple version of the classical existence and uniqueness theorem for scalar ordinary differential equations (see, e.g., [3], chapt. 5).

Theorem 1.4 If f(y, t) and ∂f(y, t)/∂y are continuous functions in a rectangle Ω = [y0 − b, y0 + b] × [t0 − a, t0 + a], a, b > 0, and |f(y, t)| ≤ M on Ω, there exists a unique, continuously differentiable solution y(t) to the problem (1.28) in the interval |t − t0| ≤ Δt = min{a, b/M}.

Remark 1.5 The time interval of existence depends on how large one can choose the rectangle, and so on the initial point (y0, t0). The solution can be continued to the future by solving the equation with new initial conditions starting at the point (y(t0 + Δt), t0 + Δt). If one tries to continue the solution as much as possible, there are two possibilities:

1. One can continue the solution to arbitrarily large times, that is, the solution exists for all times t ≥ 0.
2. There is a finite time T0 = T0(y0) > 0 such that the solution exists for all times t < T0 but not at T0. In this case the solution blows up at T0; that is, lim_{t→T0} |y(t)| = ∞.

Exercise 1.7 Show that if a smooth solution y(t) to (1.28) blows up at finite time, its derivative dy/dt blows up at the same time. Hint: Use the mean value theorem.

Exercise 1.8 Show that the converse of Exercise 1.7 is false. To this end, consider the initial value problem

dy/dt = 1/(1 − y),   y(0) = 0.

Explicitly find the solution y(t) and check that dy/dt → ∞ when t → 1/2 and that, nevertheless, y(t) stays bounded.

Exercise 1.9 Is it possible that the solution of the real equation

equation

blows up at a finite time? Explain your answer.

1.5 Principle of linearization

Consider the initial value problem

(1.29)   dy/dt = F(y),   y(0) = 0.

Assume that

F(y) = λy + y².

A simple calculation shows that the solution of (1.29) is given by

y(t) ≡ 0.

Let ε with 0 < ε ≪ 1 be a small constant and consider the perturbed problem

(1.30)   dỹ/dt = F(ỹ) + εG(t),   ỹ(0) = 0.

Here G(t) is a smooth function with

(1.31)   |G(t)| ≤ 1   for all t ≥ 0.

By Section 1.3 we expect that in some time interval 0 ≤ t ≤ T,

ỹ(t) = O(ε).²

Therefore, we make the following change of variables:

(1.32)   ỹ(t) = ε u(t).

Introducing (1.32) into (1.30) gives us

ε du/dt = F(εu) + εG(t).

The form of F gives us

(1.33)   du/dt = λu + ε u² + G(t),   u(0) = 0.

We expect that |u| ≤ 1 in some time interval 0 ≤ t ≤ T, and therefore we can neglect the quadratic term εu² to obtain the linearized equation

(1.34)   du/dt = λu + G(t),   u(0) = 0.

In Section 1.3 we discussed the growth behavior of the solutions of (1.34). It depends on the solution operator

S(t, s) = e^{λ(t−s)}.

Exercise 1.10 Prove, using the solution operator S(t, s), that problem (1.34) is strongly stable for Re λ < 0.

If the linearized equation is strongly stable (i.e., Re λ < 0), then

|u(t)| ≤ ∫_0^t e^{Re λ·(t−s)} |G(s)| ds ≤ (1/|Re λ|) max_{0≤s≤t} |G(s)|

and, using (1.31), is bounded for all times. In this case one can also show that, for sufficiently small ε,

ỹ(t) = ε u(t) + O(ε²)

for all times. Thus, the linearized equation determines, to first approximation, the effect of the perturbation on the solution.

If the linearized equation is only stable, then

|u(t)| ≤ ∫_0^t |G(s)| ds ≤ t,

and

ỹ(t) = ε u(t) + O(ε² t³),

provided that

ε t² ≪ 1.

Therefore, the linearized equation describes the behavior of the perturbation in every time interval 0 ≤ t ≤ T with T²ε ≪ 1.

If the linearized equation is unstable,

|u(t)| ≤ ∫_0^t e^{Re λ·(t−s)} |G(s)| ds ≤ (e^{Re λ·t} − 1)/Re λ,

and the time interval where the linearized equation (1.34) is a good approximation of (1.33) is restricted to a time interval 0 ≤ t ≤ T with

ε e^{Re λ·T} ≪ 1.

This behavior is general in nature.

Consider the nonlinear equation

(1.35)   dy/dt = f(y, t),   y(0) = y0,   t ≥ 0.

Assume that the solution y(t) of this problem is known. Consider a perturbation

(1.36)   dỹ/dt = f(ỹ, t) + εG(t),   ỹ(0) = y0.

We make the change of variables

ỹ(t) = y(t) + ε u(t).

Since

f(y + εu, t) = f(y, t) + ε (∂f/∂y)(y, t) u + O(ε² u²),

we obtain

(1.37)   du/dt = (∂f/∂y)(y(t), t) u + G(t) + O(ε u²),   u(0) = 0.

Neglecting the quadratic terms, we obtain the linearized equation

(1.38)   du/dt = (∂f/∂y)(y(t), t) u + G(t),   u(0) = 0.

The effect of the perturbation depends on the stability properties of (1.38).
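The quality of the linearized approximation can be observed directly in a model problem. The sketch below (NumPy and SciPy assumed; it uses the concrete perturbation equation (1.33) from the strongly stable example above, with illustrative parameters) integrates the full equation and its linearization (1.34) and reports their difference, which is O(ε).

```python
import numpy as np
from scipy.integrate import solve_ivp

lam, eps, T = -1.0, 1e-3, 20.0                 # strongly stable case: Re(lam) < 0
G = lambda t: np.cos(t)                        # smooth forcing with |G(t)| <= 1

full = solve_ivp(lambda t, u: lam * u + eps * u**2 + G(t),   # equation (1.33)
                 (0.0, T), [0.0], rtol=1e-10, atol=1e-12, dense_output=True)
lin = solve_ivp(lambda t, u: lam * u + G(t),                 # linearization (1.34)
                (0.0, T), [0.0], rtol=1e-10, atol=1e-12, dense_output=True)

t = np.linspace(0.0, T, 201)
print(np.max(np.abs(full.sol(t) - lin.sol(t))))    # O(eps): the neglected term is small
```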

Linearization is a very important tool because it is used to show that the nonlinear problem has a unique solution locally.

1The exceptional case, μ = λ, is called the case of resonance.

2From now on we frequently use the O(·) notation; for a precise definition, consult Section A.2.
