314 APPENDIX A Mathematical Background
has a unique solution x_f, and the mapping f → x_f is continuous.
Proof. Since x' → a(x, x') is continuous, there exists, by the Riesz theorem, some element of X, which can be denoted Ax (a single symbol, for the time being), such that (Ax, x') = a(x, x') for all x'. This defines a linear continuous operator A from X into itself, injective by virtue of (8), and Eq. (9) can then be written as Ax = f. This is equivalent to x − ρ(Ax − f) = x, where ρ ≠ 0 is a parameter that can be chosen at leisure. Let {x_n} be the sequence defined by x_0 = 0 and x_{n+1} = (1 − ρA)x_n + ρf. If it does converge, the limit is the solution x_f of Ax = f, and α‖x_f‖ ≤ ‖f‖ after (9), hence the continuity of f → x_f. The sequence will converge if ‖1 − ρA‖ < 1, so let's compute:

‖x − ρAx‖² = ‖x‖² − 2ρ(Ax, x) + ρ²‖Ax‖² ≤ (1 − 2ρα + ρ²‖A‖²)‖x‖²

after (8), so ‖1 − ρA‖ < 1 if 0 < ρ < 2α/‖A‖². (Note that no symmetry of a was assumed or used.) ◊
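The iteration in the proof is easy to observe in finite dimension. The sketch below (an illustrative choice of matrix, not from the text) uses a non-symmetric but coercive 2 × 2 operator with α = 2 and ‖A‖² = 5, so any ρ in (0, 2α/‖A‖²) = (0, 0.8) makes x ↦ x − ρ(Ax − f) a contraction:

```python
import numpy as np

# A coercive but non-symmetric operator on R^2 (illustrative choice):
# (Ax, x) = 2|x|^2 for all x, so alpha = 2; one checks ||A||^2 = 5.
A = np.array([[2.0, 1.0], [-1.0, 2.0]])
f = np.array([1.0, 3.0])

rho = 0.4            # any rho in (0, 2*alpha/||A||^2) = (0, 0.8) works

x = np.zeros(2)      # x_0 = 0, as in the proof
for _ in range(200):
    x = x - rho * (A @ x - f)   # x_{n+1} = (1 - rho*A) x_n + rho*f

assert np.allclose(A @ x, f)    # the limit solves Ax = f
```

No symmetry of A is used, only coercivity of the associated bilinear form, mirroring the remark at the end of the proof.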
The standard application is then to the problem, find x ∈ Uᵍ such that a(x, x') = 0 ∀ x' ∈ U⁰, where a is a continuous bilinear map. By picking some xᵍ in Uᵍ, this amounts to finding x⁰ in U⁰ such that a(x⁰ + xᵍ, x') = 0 ∀ x' ∈ U⁰. As seen by setting f(x') = −a(xᵍ, x') and X = U⁰, the lemma applies if the restriction of a to U⁰ is coercive.
As mentioned in Note 48, the need arises to extend all these notions and results to complex spaces. This is most easily, if not most compactly, done by complexification. The complexified Uᶜ of a vector space U is the set U × U with composition laws induced by the following prescription: An element u = {u_R, u_I} of Uᶜ being written in the form u = u_R + iu_I, one applies the usual rules of algebra, with i² = −1. Thus, u + u' = u_R + iu_I + u'_R + iu'_I = u_R + u'_R + i(u_I + u'_I), and if λ = λ_R + iλ_I, then λu = (λ_R + iλ_I)(u_R + iu_I) = λ_R u_R − λ_I u_I + i(λ_R u_I + λ_I u_R).

The Hermitian scalar product (u, v) of two complex vectors u = u_R + iu_I and v = v_R + iv_I is by convention the one obtained by developing the product (u_R + iu_I, v_R − iv_I) after the same rules, so (u, v) = (u_R, v_R) + (u_I, v_I) + i(u_I, v_R) − i(u_R, v_I). The norm of u is given by |u|² = (u, u). (Be aware that a different convention is adopted in Chapter 8, where expressions such as (rot u)² are understood as rot u · rot u, not as |rot u|².)
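These rules can be sanity-checked in finite dimension. The following sketch (illustrative vectors of my choosing, not from the text) verifies that the pair formula for the Hermitian product agrees with ordinary complex arithmetic, with the conjugate falling on the second argument as in the convention above:

```python
import numpy as np

# A complexified vector is a pair {u_R, u_I}, read as u = u_R + i u_I.
uR, uI = np.array([1.0, 2.0]), np.array([0.5, -1.0])
vR, vI = np.array([3.0, 0.0]), np.array([1.0, 4.0])

# The book's formula: (u, v) = (uR, vR) + (uI, vI) + i[(uI, vR) - (uR, vI)]
herm = (uR @ vR + uI @ vI) + 1j * (uI @ vR - uR @ vI)

# Same result from plain complex arithmetic: sum of u_k * conj(v_k),
# i.e. developing (uR + i uI, vR - i vI) with i^2 = -1.
u, v = uR + 1j * uI, vR + 1j * vI
assert np.isclose(herm, np.sum(u * np.conj(v)))

# |u|^2 = (u, u) is real and equals (uR, uR) + (uI, uI):
assert np.isclose(uR @ uR + uI @ uI, np.real(np.sum(u * np.conj(u))))
```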
Now, when X is complex, all things said up to now remain valid, if (x, y) is understood as the Hermitian scalar product, with obvious adjustments: f is complex-valued, and the Riesz vector is no longer linear, but anti-linear with respect to f (to multiply f by λ multiplies x_f by λ*). The form a in the Lax–Milgram lemma becomes "sesqui"-linear (anti-linear with respect to the second argument), and the same computation as above yields the same result, provided

Re[a(x, x)] ≥ α‖x‖² ∀ x ∈ X,

with α > 0, which is what "coercive" means in the complex case.
Remark A.10. The lemma remains valid if λa is coercive, in this sense, for some complex number λ. We make use of this in 8.1.3, where the problem is of the form find x ∈ Uᵍ such that a(x, x') = 0 ∀ x' ∈ U⁰, with (1 − i)a coercive over U⁰.
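A toy instance of this remark (the form a below is an arbitrary choice of mine, not the one of 8.1.3): with a(u, v) = i(u, v), the form a itself has Re[a(x, x)] = 0, hence is not coercive, while (1 − i)a is:

```python
import numpy as np

x = np.array([1.0 + 2.0j, -1.0j, 3.0])

def a(u, v):
    # Toy sesquilinear form (anti-linear in v): a(u, v) = i (u, v)
    return 1j * np.sum(u * np.conj(v))

# a(x, x) = i |x|^2 is purely imaginary: a is not coercive ...
assert np.isclose(np.real(a(x, x)), 0.0)

# ... but Re[(1 - i) a(x, x)] = |x|^2 > 0, so (1 - i)a is coercive.
norm2 = np.real(np.sum(x * np.conj(x)))
assert np.isclose(np.real((1 - 1j) * a(x, x)), norm2)
```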
The theory does not stop there. Next steps would be about orthonormal bases and Fourier coefficients, whose treatment here would be out of proportion with the requirements of the main text. Let's just mention (because it is used once in Chapter 9) the notion of weak convergence: A sequence {x_n : n ∈ ℕ} weakly converges toward x if

lim_{n→∞} (x_n, y) = (x, y) ∀ y ∈ X.

This is usually denoted by x_n ⇀ x. By continuity of the scalar product, convergence in the standard sense (then named "strong convergence" for contrast) implies weak convergence, but not the other way around: for instance, the sequence of functions x → sin nx, defined on [−1, 1], converges to 0 weakly, but not strongly. However, weak convergence plus convergence of the norm is equivalent to strong convergence.
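The sin nx example can be observed numerically. In this sketch (assumed ingredients: a simple grid quadrature on [−1, 1] and the fixed test function y(x) = eˣ, both my choices), the inner products (x_n, y) shrink toward 0 as n grows, while ‖x_n‖² stays near 1, so there is no strong convergence to 0:

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 200001)
dx = x[1] - x[0]
y = np.exp(x)                    # an arbitrary fixed element y of L^2(-1, 1)

def inner(u, v):
    # Crude grid approximation of the L^2(-1, 1) scalar product
    return np.sum(u * v) * dx

for n in (1, 10, 100, 1000):
    s_n = np.sin(n * x)
    print(n, inner(s_n, y), inner(s_n, s_n))

# For large n: (x_n, y) is tiny, but ||x_n||^2 is close to 1, not 0.
s_n = np.sin(1000 * x)
assert abs(inner(s_n, y)) < 0.01
assert abs(inner(s_n, s_n) - 1.0) < 0.01
```

The vanishing of (sin n·, y) for every fixed y is the Riemann–Lebesgue phenomenon; the non-vanishing norm is what blocks strong convergence.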
Compact operators are those that map weakly convergent sequences to strongly convergent ones. It's not possible to do justice to their theory here. Let's just informally mention that, just as Hilbert space is what most closely resembles Euclidean space among infinite-dimensional functional spaces, compact operators are the closest kin to matrices in infinite dimension, with in particular similar spectral properties (existence of eigenvalues and associated eigenvectors). An important result in this theory, Fredholm's alternative, is used in Chapter 9. Cf. [Yo] on this.
A.4.4 Closed linear relations, adjoints
The notion of adjoint is essential to a full understanding of the relations between grad and div, the peculiarities of rot, and integration by parts formulas involving these operators.

We know (p. 284) what a linear relation A : X → Y is: one the graph A of which is a subspace of the vector space X × Y. If the relation is functional, i.e., if the section Ax contains no more than one element, we have a linear operator. By linearity, this amounts to saying that the only pair in X × Y of the form {0, y} that may belong to A is {0, 0}.
Suppose now X and Y are Hilbert spaces, with respective scalar products ( , )_X and ( , )_Y. Whether A is closed, with respect to the metric induced on X × Y by the scalar product ({x, y}, {x', y'}) = (x, x')_X + (y, y')_Y, is a legitimate question. If A is continuous, its graph is certainly closed, for if a sequence of pairs {x_n, Ax_n} belonging to A converges to some pair {x, y}, then y = Ax. The converse is not true (Remark A.4), so we are led to introduce the notion of closed operator, as one the graph of which is closed.
Now if the graph of a linear relation {X, Y, A} is not closed, why not consider its closure {X, Y, Ā}? We get a new relation this way, which is an extension of the given one. But it may fail to be functional, because pairs of the form {0, y} with y ≠ 0 may happen to be adherent to A. Hence the following definition: An operator is closable if the closure of its graph is functional. In Chapter 5, we work out in detail the case of div : 𝕃²(D) → L²(D), with domain C^∞(D), find it closable, and define the "weak" divergence as its closure. The new operator thus obtained has an enlarged domain (denoted 𝕃²_div(D)) and is, of course, closed, but not continuous on 𝕃²(D).
There is a way to systematically obtain closed operators. Start from some operator A, and take the orthogonal A⊥ of its graph in X × Y. This is, as we know, a closed subspace of the Cartesian product. Now consider the relation {Y, X, A⊥}, with X as target space. If this happens to be a functional relation, we denote by −A* the corresponding operator, which thus will satisfy the identity

(11) (x, A*y)_X = (y, Ax)_Y ∀ x ∈ dom(A), ∀ y ∈ dom(A*),

and call A*, an operator of type Y → X, the adjoint⁴⁹ of A.
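In finite dimension, where every subspace is closed and every domain is the whole space, this construction reduces to the transpose. The sketch below (illustrative matrices of my choosing, not from the text) checks that the pairs {−Aᵀy, y} exhaust the orthogonal of the graph of A, so the relation {Y, X, A⊥} is the graph of −Aᵀ, and verifies identity (11) with A* = Aᵀ:

```python
import numpy as np

n, m = 3, 2
A = np.array([[1.0, 2.0, 0.0],    # an operator A : R^3 -> R^2
              [0.0, -1.0, 3.0]])

# Graph of A: columns of G span {(x, Ax)} inside R^n x R^m.
G = np.vstack([np.eye(n), A])

# Candidate orthogonal: pairs {-A^T y, y}, i.e. the graph of -A^T
# read as a relation from Y = R^m to X = R^n.
H = np.vstack([-A.T, np.eye(m)])

# Every column of H is orthogonal to every column of G ...
assert np.allclose(G.T @ H, 0.0)
# ... and the dimensions add up to n + m, so H spans all of A-perp.
assert np.linalg.matrix_rank(np.hstack([G, H])) == n + m

# Identity (11): (x, A*y)_X = (y, Ax)_Y, with A* = A^T here.
x, y = np.array([1.0, 2.0, 3.0]), np.array([4.0, 5.0])
assert np.isclose(x @ (A.T @ y), y @ (A @ x))
```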
So when is A⊥ functional? The following statement gives the answer:

Proposition A.1. Let A = {X, Y, A} be a given linear relation. The relation {Y, X, A⊥} is functional if and only if dom(A) is dense in X.

Proof. If {x, 0} ∈ A⊥, then (x, ξ)_X = −(0, Aξ)_Y = 0 for all ξ ∈ dom(A), after (11). So if dom(A) is dense, then x = 0, and A⊥ is functional. Conversely, if dom(A) is not dense, there is some x ≠ 0 in the
⁴⁹Not to be confused with the dual of A, similarly defined, but going from the dual Y' of Y to the dual X' of X. The notion of adjoint is specifically Hilbertian.