It was remarked in Section 5.1 that the determinant function is the classic example of a multilinear function. In addition to its multilinearity, the determinant function has another characteristic feature—it is alternating. This chapter is devoted to an examination of tensors that have a corresponding property.
7.1 Multicovectors
Let V be a vector space of dimension m, and let s ≥ 1 be an integer. Following Section B.2, we denote by Ss the group of permutations on {1, … , s}. For each permutation σ in Ss, consider the linear map σ : 𝒯s(V) → 𝒯s(V) defined by

σ(𝒜)(v1, … , vs) = 𝒜(vσ(1), … , vσ(s))  (7.1.1)

for all tensors 𝒜 in 𝒯s(V) and all vectors v1, … , vs in V. By saying that σ is linear, we mean that for all tensors 𝒜, ℬ in 𝒯s(V) and all real numbers c,

σ(𝒜 + cℬ) = σ(𝒜) + cσ(ℬ).
There is potential confusion arising from (7.1.1), as a simple example illustrates. Let s = 3, let 𝒜 be a tensor in 𝒯3(V), and consider the permutation σ = (1 2 3) in S3. According to (7.1.1),

σ(𝒜)(v1, v2, v3) = 𝒜(v2, v3, v1)

for all vectors v1, v2, v3 in V. To be consistent, it seems that σ(𝒜)(v1, v3, v2) should be interpreted as 𝒜(vσ(1), vσ(3), vσ(2)) = 𝒜(v2, v1, v3), but this is incorrect. The issue is that the indices in (v1, v3, v2) are not sequential, which is implicit in the way (7.1.1) is presented. Setting (w1, w2, w3) = (v1, v3, v2), we have from (7.1.1) that

σ(𝒜)(v1, v3, v2) = σ(𝒜)(w1, w2, w3) = 𝒜(w2, w3, w1) = 𝒜(v3, v2, v1).

In most of the computations that follow, the indices are sequential, thereby avoiding this issue.
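Since the relabeling convention in (7.1.1) is easy to misread, a small Python sketch (an illustration, not part of the text; the sample tensor is an arbitrary function of three vectors) makes the point concrete:

```python
# The action (7.1.1) of a permutation sigma on an s-tensor, with the
# relabeling issue for non-sequential indices made explicit.

def apply_perm(sigma, A):
    """Return sigma(A), where sigma(A)(v_1,...,v_s) = A(v_sigma(1),...,v_sigma(s)).
    sigma is a dict mapping {1,...,s} to itself (1-based, as in the text)."""
    def sA(*vs):
        return A(*(vs[sigma[i] - 1] for i in range(1, len(vs) + 1)))
    return sA

sigma = {1: 2, 2: 3, 3: 1}            # the cycle (1 2 3): 1 -> 2, 2 -> 3, 3 -> 1
A = lambda u, v, w: u[0] + 2 * v[0] + 4 * w[0]   # arbitrary sample "tensor"
sA = apply_perm(sigma, A)

v1, v2, v3 = (1, 0), (10, 0), (100, 0)

# Sequential indices: sigma(A)(v1, v2, v3) = A(v2, v3, v1).
assert sA(v1, v2, v3) == A(v2, v3, v1)

# Non-sequential indices: relabel (w1, w2, w3) = (v1, v3, v2), so
# sigma(A)(v1, v3, v2) = A(w2, w3, w1) = A(v3, v2, v1) ...
assert sA(v1, v3, v2) == A(v3, v2, v1)
# ... and NOT the naive index-chasing reading A(v2, v1, v3):
assert sA(v1, v3, v2) != A(v2, v1, v3)
```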
We say that a tensor 𝒜 in 𝒯s(V) is symmetric if σ(𝒜) = 𝒜 for all permutations σ in Ss; that is,

𝒜(vσ(1), … , vσ(s)) = 𝒜(v1, … , vs)

for all vectors v1, … , vs in V. We denote the set of symmetric tensors in 𝒯s(V) by Σs(V). It is easily shown that Σs(V) is a subspace of 𝒯s(V).
A tensor ℬ in 𝒯s(V) is said to be alternating if τ(ℬ) = −ℬ for all transpositions τ in Ss, or equivalently, if

ℬ(v1, … , vi, … , vj, … , vs) = −ℬ(v1, … , vj, … , vi, … , vs)

for all vectors v1, … , vs in V and all 1 ≤ i < j ≤ s. The set of alternating tensors in 𝒯s(V) is denoted by Λs(V). It is readily demonstrated that Λs(V) is a subspace of 𝒯s(V). An element of Λs(V) is called an s‐covector or multicovector. When s = 1, the alternating criterion is vacuous, and a 1‐covector is simply a covector. Thus,

Λ1(V) = V*.  (7.1.2)

Since Λs(V) is a subspace of 𝒯s(V), the zero multicovector in Λs(V) is precisely the zero tensor in 𝒯s(V). A multicovector in Λs(V) is nonzero precisely when it is nonzero as a tensor in 𝒯s(V). To be consistent with (5.1.2), we define

Λ0(V) = ℝ.  (7.1.3)

With these definitions, the determinant function det : (Matm×1)m → ℝ is seen to be a multicovector in Λm(Matm×1).
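Both defining properties of a multicovector can be checked numerically for the determinant. A brief Python sketch (an illustration, not part of the text) verifies linearity in one slot and the sign flip under a column swap for the 3 × 3 determinant regarded as a function of columns:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 3
cols = [rng.standard_normal(m) for _ in range(m)]   # three arbitrary columns

# det as a function of m column vectors:
det = lambda *vs: np.linalg.det(np.column_stack(vs))

# Multilinearity in the first slot: det(2u + w, ., .) = 2 det(u, ., .) + det(w, ., .)
u, w = rng.standard_normal(m), rng.standard_normal(m)
lhs = det(2 * u + w, cols[1], cols[2])
rhs = 2 * det(u, cols[1], cols[2]) + det(w, cols[1], cols[2])
assert np.isclose(lhs, rhs)

# Alternating: interchanging two arguments flips the sign.
assert np.isclose(det(cols[0], cols[1], cols[2]),
                  -det(cols[1], cols[0], cols[2]))
```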
There are several equivalent ways to characterize multicovectors in Λs(V).
We now introduce a way of associating a multicovector to a given tensor. Let V be a vector space. Alternating map is the family of linear maps

Alt : 𝒯s(V) → Λs(V)

defined for s ≥ 0 by

Alt(𝒜) = (1/s!) Σσ∈Ss sgn(σ) σ(𝒜)  (7.1.4)

for all tensors 𝒜 in 𝒯s(V).
In view of Theorem 7.1.3(b), we can replace σ in (7.1.4) with σ−1 without changing the sum.
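A Python sketch of the alternating map (an illustration, not part of the text, assuming the normalization Alt(𝒜) = (1/s!) Σσ sgn(σ) σ(𝒜); the sample bilinear tensor is an arbitrary choice):

```python
from itertools import permutations
from math import factorial

def perm_sign(p):
    """Sign of a permutation given as a tuple of 0-based indices (inversion count)."""
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

def alt(A, s):
    """Alt(A)(v1,...,vs) = (1/s!) sum over sigma of sgn(sigma) A(v_sigma(1),...,v_sigma(s))."""
    def altA(*vs):
        return sum(perm_sign(p) * A(*(vs[i] for i in p))
                   for p in permutations(range(s))) / factorial(s)
    return altA

# A bilinear but non-alternating tensor on R^2:
A = lambda u, v: u[0] * v[0] + u[0] * v[1]
aA = alt(A, 2)

u, v = (1.0, 2.0), (3.0, 5.0)
# Alt(A) is alternating ...
assert abs(aA(u, v) + aA(v, u)) < 1e-12
# ... and for s = 2 it reduces to the familiar antisymmetrization:
assert abs(aA(u, v) - (A(u, v) - A(v, u)) / 2) < 1e-12
```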
7.2 Wedge Products
In Section 5.1, we introduced a type of multiplication of tensors called the tensor product. Our next task is to define a corresponding operation for multicovectors.
Let V be a vector space. Wedge product is the family of bilinear maps

∧ : Λs(V) × Λs′(V) → Λs+s′(V)

defined for s, s′ ≥ 0 by

η ∧ ζ = ((s + s′)!/(s! s′!)) Alt(η ⊗ ζ)  (7.2.1)

for all multicovectors η in Λs(V) and ζ in Λs′(V). That is,

(η ∧ ζ)(v1, … , vs+s′) = ((s + s′)!/(s! s′!)) Alt(η ⊗ ζ)(v1, … , vs+s′) = (1/(s! s′!)) Σσ∈Ss+s′ sgn(σ) η(vσ(1), … , vσ(s)) ζ(vσ(s+1), … , vσ(s+s′))

for all vectors v1, … , vs+s′ in V, where the first equality follows from (7.2.1), and the second equality from (5.1.4) and (7.1.1).
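The expanded sum translates directly into code. A Python sketch (an illustration, not part of the text, assuming the convention η ∧ ζ = ((s + s′)!/(s! s′!)) Alt(η ⊗ ζ), so that the wedge of two covectors is the 2 × 2 coordinate determinant):

```python
from itertools import permutations
from math import factorial

def perm_sign(p):
    """Sign of a permutation given as a tuple of 0-based indices."""
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

def wedge(eta, s, zeta, sp):
    """(eta ^ zeta)(v1,...,v_{s+sp}) =
    (1/(s! sp!)) sum_sigma sgn(sigma) eta(v_sigma(1..s)) zeta(v_sigma(s+1..s+sp))."""
    def w(*vs):
        return sum(perm_sign(p)
                   * eta(*(vs[i] for i in p[:s]))
                   * zeta(*(vs[i] for i in p[s:]))
                   for p in permutations(range(s + sp))) / (factorial(s) * factorial(sp))
    return w

# Two coordinate covectors on R^2:
theta1 = lambda v: v[0]
theta2 = lambda v: v[1]
w12 = wedge(theta1, 1, theta2, 1)

u, v = (1.0, 2.0), (3.0, 5.0)
# For covectors: (theta1 ^ theta2)(u, v) = theta1(u)theta2(v) - theta1(v)theta2(u),
# i.e., the 2x2 determinant of coordinates.
assert abs(w12(u, v) - (u[0] * v[1] - v[0] * u[1])) < 1e-12
```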
Wedge products behave well with respect to basic algebraic structure.
Any operation that purports to be a type of “multiplication” should be associative. The wedge product meets this requirement.
In light of the associativity of the wedge product, we drop parentheses and, for example, denote (η ∧ ζ) ∧ ξ and η ∧ (ζ ∧ ξ) by η ∧ ζ ∧ ξ, with corresponding notation for wedge products of more than three terms.
The next result is a generalization of Theorem 7.2.5.
The next result shows that wedge products and determinants are closely related, which is not so surprising.
Part (a) of the next result is a generalization of Theorem 1.2.1(e).
7.3 Pullback of Multicovectors
The pullback of covariant tensors was briefly considered in Section 5.2. The corresponding theory for multicovectors is far richer.
Before proceeding, we pause to consider multi‐index notation, which was discussed in Section 2.1 in the context of matrices. Let V be a vector space, let (h1, … , hm) be a basis for V, and let (θ1, … , θm) be its dual basis. For an integer 1 ≤ s ≤ m, let I = (i1, … , is) be a multi‐index in ℐs,m, and let us denote

θI = θi1 ∧ ⋯ ∧ θis

and

hI = (hi1, … , his).

In this notation, the unordered basis for Λs(V) in Theorem 7.2.12(a) can be expressed concisely as {θI : I ∈ ℐs,m}, and the identity in Theorem 7.2.13(a) becomes

η = ΣI∈ℐs,m η(hI) θI.

If we order ℐs,m in some fashion, for example, lexicographically, then the basis {θI : I ∈ ℐs,m} of Theorem 7.2.12(a) can be similarly ordered, yielding an ordered basis for Λs(V), which we denote by

(θI : I ∈ ℐs,m).

Kronecker's delta can be generalized to the multi‐index setting. Let J = (j1, … , js) be another multi‐index in ℐs,m, and define

δIJ = 1 if I = J, and δIJ = 0 otherwise.

Then the conditional identity in Theorem 7.2.9 is simply θI(hJ) = δIJ. As discussed in Appendix A, the complement of the multi‐index (i) in ℐ1,m is

(i)′ = (1, … , i − 1, i + 1, … , m)

for i = 1, … , m. In this notation, the multicovectors comprising the basis in Theorem 7.2.12(e) can be expressed as θ(i)′ for i = 1, … , m. We will find multi‐index notation of great utility in what follows.
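The increasing multi-indices in ℐs,m are exactly the s-element subsets of {1, … , m} listed in increasing order, so a few lines of Python can enumerate them and confirm the count (an illustration, not part of the text):

```python
from itertools import combinations
from math import comb

m, s = 4, 2
# Increasing multi-indices I = (i1 < ... < is) with entries in {1,...,m}:
idx = list(combinations(range(1, m + 1), s))

# They index the basis {theta^I} of Lambda^s(V), so dim Lambda^s(V) = C(m, s).
assert len(idx) == comb(m, s) == 6

# Lexicographic order comes for free from combinations():
assert idx[0] == (1, 2) and idx[-1] == (3, 4)
```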
Let V and W be vector spaces, and let A : V → W be a linear map. Pullback by A (for multicovectors) is the family of linear maps

A* : Λs(W) → Λs(V)

defined for s ≥ 1 by

A*(η)(v1, … , vs) = η(A(v1), … , A(vs))  (7.3.1)

for all multicovectors η in Λs(W) and all vectors v1, … , vs in V. It follows from the multilinearity and alternating properties of η and the linearity of A that A*(η) is in Λs(V), so the definition makes sense. We refer to A*(η) as the pullback of η by A.
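Definition (7.3.1) translates directly into code. A minimal Python sketch (an illustration, not part of the text; the map A and the 2-covector are arbitrary choices, and the determinant comparison at the end is a special feature of top-degree covectors, not the general definition):

```python
import numpy as np

def pullback(A, eta):
    """A*(eta)(v1,...,vs) = eta(A v1, ..., A vs), per (7.3.1)."""
    return lambda *vs: eta(*(A @ v for v in vs))

# A 2-covector on R^2 (the 2x2 coordinate determinant):
eta = lambda u, v: u[0] * v[1] - u[1] * v[0]

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])             # a linear map R^2 -> R^2
Aeta = pullback(A, eta)

u, v = np.array([1.0, 0.0]), np.array([0.0, 1.0])
# For this top-degree covector, the pullback scales by det A:
assert np.isclose(Aeta(u, v), np.linalg.det(A) * eta(u, v))
```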
Special cases of Theorem 7.3.2 provide a number of useful identities.
The next result is reminiscent of Theorem 2.4.9.
7.4 Interior Multiplication
Let V be a vector space of dimension m, and let v be a vector in V. Interior multiplication by v is the family of linear maps

iv : Λs(V) → Λs−1(V)

defined for s ≥ 2 by

iv(η)(w1, … , ws−1) = η(v, w1, … , ws−1)

for all multicovectors η in Λs(V) and all vectors w1, … , ws−1 in V. Since any m + 1 vectors in V are linearly dependent, it follows from Theorem 7.3.5 that iv is the zero map when s > m. Recalling from (7.1.2) and (7.1.3) that Λ1(V) = V* and Λ0(V) = ℝ, we extend the preceding definition to s = 1 as follows: iv : Λ1(V) → Λ0(V) is given by

iv(η) = η(v)

for all covectors η in Λ1(V). For s = 0, we trivially define iv = 0. Let us denote
Not surprisingly, interior multiplication behaves well with respect to basic algebraic structure, but its handling of wedge products is more complex.
The next result shows that interior multiplication satisfies a novel product rule.
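As a concrete check (an illustration, not part of the text, and assuming the product rule takes the usual antiderivation form iv(η ∧ ζ) = iv(η) ∧ ζ + (−1)s η ∧ iv(ζ)), a Python sketch for two 1-covectors, where the rule reads iv(η ∧ ζ) = η(v)ζ − ζ(v)η:

```python
# Two arbitrary 1-covectors on R^2:
eta = lambda u: 2.0 * u[0] + u[1]
zeta = lambda u: u[0] - 3.0 * u[1]

# Wedge of two 1-covectors: (eta ^ zeta)(u, w) = eta(u)zeta(w) - eta(w)zeta(u).
def wedge1(eta, zeta):
    return lambda u, w: eta(u) * zeta(w) - eta(w) * zeta(u)

# Interior multiplication: i_v(B)(w) = B(v, w) for a 2-covector B.
v = (1.0, 2.0)
B = wedge1(eta, zeta)
ivB = lambda w: B(v, w)

# Product rule instance (s = 1): i_v(eta ^ zeta) = eta(v) * zeta - zeta(v) * eta.
w = (4.0, -1.0)
assert abs(ivB(w) - (eta(v) * zeta(w) - zeta(v) * eta(w))) < 1e-12
```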
7.5 Multicovector Scalar Product Spaces
In Section 4.5, we showed how to construct the dual of a scalar product space. Building on that foundation, we now generalize to multicovectors.
Let (V, g) be a scalar product space, let ℋ be a basis for V, and let Θ = (θ1, … , θm) be its dual basis. By Theorem 7.2.12(a), (θI : I ∈ ℐs,m) is a basis for Λs(V). We define a bilinear function

gΛ : Λs(V) × Λs(V) → ℝ

as follows. For multicovectors η, ζ in Λs(V), with

η = ΣI∈ℐs,m aI θI and ζ = ΣJ∈ℐs,m bJ θJ,  (7.5.1)

let

gΛ(η, ζ) = ΣI,J∈ℐs,m aI bJ det[(gikjl)1≤k,l≤s],  (7.5.2)

where g* is the scalar product on V* and (gij) is the matrix of g* with respect to Θ.
It would appear that the definition of gΛ is dependent on the choice of basis for V. Remarkably, this turns out not to be the case:
We are now in a position to define the multicovector counterpart of the dual scalar product space described in Section 4.5.
Now that we have the scalar product space (Λs(V), gΛ), the construction in Section 4.5 yields the corresponding dual scalar product space (Λs(V)*, gΛ*) and the associated flat map and sharp map:
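A Python sketch of (7.5.2) (an illustration, not part of the text, assuming V = ℝ3 with the Euclidean scalar product, so that the matrix of g* with respect to Θ is the identity; in that case the basis (θI) comes out orthonormal in (Λs(V), gΛ)):

```python
import numpy as np
from itertools import combinations

m, s = 3, 2
G_dual = np.eye(m)                      # [g^{ij}], matrix of g* w.r.t. Theta (Euclidean case)

idx = list(combinations(range(m), s))   # increasing multi-indices (0-based here)

def g_lambda(a, b):
    """g_Lambda(eta, zeta) = sum_{I,J} a_I b_J det( g^{i_k j_l} ), per (7.5.2).
    a, b are coefficient lists of eta, zeta in the ordered basis (theta^I)."""
    total = 0.0
    for I, aI in zip(idx, a):
        for J, bJ in zip(idx, b):
            gram = np.array([[G_dual[i, j] for j in J] for i in I])
            total += aI * bJ * np.linalg.det(gram)
    return total

# eta = theta^(1,2), zeta = theta^(1,3) in the lexicographic ordering of idx:
e1 = [1.0, 0.0, 0.0]
e2 = [0.0, 1.0, 0.0]
assert np.isclose(g_lambda(e1, e1), 1.0)   # unit length
assert np.isclose(g_lambda(e1, e2), 0.0)   # orthogonal
```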