Chapter 9

Linear Maps in 3D

The flight simulator is an important part of the training of airplane pilots. It has a real cockpit, but what you see outside the windows is computer imagery. As you take a right turn, the terrain below changes accordingly; as you dive downwards, it comes closer to you. When you change the (simulated) position of your plane, the simulation software must recompute a new view of the terrain, clouds, or other aircraft. This is done through the application of 3D affine and linear maps.1 Figure 9.1 shows an image that was generated by an actual flight simulator. For each frame of the simulated scene, complex 3D computations are necessary, most of them consisting of the types of maps discussed in this section.

Figure 9.1

Flight simulator: 3D linear maps are necessary to create the twists and turns in a flight simulator. (Image is from the NASA website http://www.nasa.gov.)

9.1 Matrices and Linear Maps

The general concept of a linear map in 3D is the same as that for a 2D map. Let v be a vector in the standard [e1, e2, e3]-coordinate system, i.e.,

$$\mathbf{v} = v_1\mathbf{e}_1 + v_2\mathbf{e}_2 + v_3\mathbf{e}_3.$$

(See Sketch 9.1 for an illustration.)

Sketch 9.1

A vector in the [e1, e2, e3]-coordinate system.

Let another coordinate system, the [a1, a2, a3]-coordinate system, be given by the origin o and three vectors a1, a2, a3. What vector v′ in the [a1, a2, a3]-system corresponds to v in the [e1, e2, e3]-system? Simply the vector with the same coordinates relative to the [a1, a2, a3]-system. Thus,

$$\mathbf{v}' = v_1\mathbf{a}_1 + v_2\mathbf{a}_2 + v_3\mathbf{a}_3. \qquad(9.1)$$

This is illustrated by Sketch 9.2 and the following example.

Sketch 9.2

The matrix A maps v in the [e1, e2, e3]-coordinate system to the vector v′ in the [a1, a2, a3]-coordinate system.

Example 9.1

Let

$$\mathbf{v}=\begin{bmatrix}1\\1\\2\end{bmatrix},\quad \mathbf{a}_1=\begin{bmatrix}2\\0\\1\end{bmatrix},\quad \mathbf{a}_2=\begin{bmatrix}0\\1\\0\end{bmatrix},\quad \mathbf{a}_3=\begin{bmatrix}0\\0\\1/2\end{bmatrix}.$$

Then

$$\mathbf{v}' = 1\begin{bmatrix}2\\0\\1\end{bmatrix} + 1\begin{bmatrix}0\\1\\0\end{bmatrix} + 2\begin{bmatrix}0\\0\\1/2\end{bmatrix} = \begin{bmatrix}2\\1\\2\end{bmatrix}.$$

You should recall that we had the same configuration earlier for the 2D case—(9.1) corresponds directly to (4.2) of Section 4.1. In Section 4.2, we then introduced the matrix form. That is now an easy project for this chapter—nothing changes except the matrices will be 3 × 3 instead of 2 × 2. In 3D, a matrix equation looks like this:

$$\mathbf{v}' = A\mathbf{v}, \qquad(9.2)$$

i.e., just the same as for the 2D case. Written out in detail, there is a difference:

$$\begin{bmatrix}v_1'\\v_2'\\v_3'\end{bmatrix} = \begin{bmatrix}a_{1,1}&a_{1,2}&a_{1,3}\\a_{2,1}&a_{2,2}&a_{2,3}\\a_{3,1}&a_{3,2}&a_{3,3}\end{bmatrix}\begin{bmatrix}v_1\\v_2\\v_3\end{bmatrix}. \qquad(9.3)$$

All matrix properties from Chapter 4 carry over almost verbatim.

Example 9.2

Returning to our example, it is quite easy to condense it into a matrix equation:

$$\begin{bmatrix}2&0&0\\0&1&0\\1&0&1/2\end{bmatrix}\begin{bmatrix}1\\1\\2\end{bmatrix} = \begin{bmatrix}2\\1\\2\end{bmatrix}.$$

Again, if we multiply a matrix A by a vector v, the ith component of the result vector is obtained as the dot product of the ith row of A and v.

The matrix A represents a linear map. Given the vector v in the [e1, e2, e3]-system, there is a vector v′ in the [a1, a2, a3]-system such that v′ has the same components in the [a1, a2, a3]-system as did v in the [e1, e2, e3]-system. The matrix A finds the components of v′ relative to the [e1, e2, e3]-system.
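To make this concrete, here is Example 9.2 carried out in a few lines of numpy (our choice of tool; the computation itself is language-agnostic):

```python
import numpy as np

# Example 9.2: A is built by writing a1, a2, a3 as its columns.
A = np.array([[2.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [1.0, 0.0, 0.5]])
v = np.array([1.0, 1.0, 2.0])

print(A @ v)   # [2. 1. 2.], the vector v' of Example 9.1
```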

With the 2 × 2 matrices of Section 4.2, we introduced the transpose AT of a matrix A. We will need this for 3 × 3 matrices, and it is obtained by interchanging rows and columns, i.e.,

$$\begin{bmatrix}2&3&4\\3&9&4\\1&9&4\end{bmatrix}^{\mathrm{T}} = \begin{bmatrix}2&3&1\\3&9&9\\4&4&4\end{bmatrix}.$$

Each row of A has become the corresponding column of AT. As a concise formula,

$$a^{\mathrm{T}}_{i,j} = a_{j,i}.$$
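In numpy, transposition is the `.T` attribute; a quick sketch confirming the formula above:

```python
import numpy as np

A = np.array([[2, 3, 4],
              [3, 9, 4],
              [1, 9, 4]])

print(A.T)                    # rows of A become columns of A.T
assert A.T[0, 1] == A[1, 0]   # a^T_{i,j} = a_{j,i}
```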

9.2 Linear Spaces

The set of all 3D vectors is referred to as a 3D linear space or vector space, and it is denoted as ℝ3. We associate with ℝ3 the operation of forming linear combinations. This means that if v and w are two vectors in this linear space, then any vector

$$\mathbf{u} = s\mathbf{v} + t\mathbf{w} \qquad(9.4)$$

is also in this space. The vector u is then said to be a linear combination of v and w. This is also called the linearity property. Notice that the linear combination (9.4) combines scalar multiplication and vector addition. These are the key operations necessary for a linear space.

Select two vectors v and w and consider all vectors u that may be expressed as (9.4) with arbitrary scalars s, t. Clearly, all vectors u form a subset of all 3D vectors. But beyond that, they form a linear space themselves—a 2D space. For if two vectors u1 and u2 are in this space, then they can be written as

$$\mathbf{u}_1 = s_1\mathbf{v} + t_1\mathbf{w} \quad\text{and}\quad \mathbf{u}_2 = s_2\mathbf{v} + t_2\mathbf{w},$$

and thus any linear combination of them can be written as

$$\alpha\mathbf{u}_1 + \beta\mathbf{u}_2 = (\alpha s_1 + \beta s_2)\mathbf{v} + (\alpha t_1 + \beta t_2)\mathbf{w},$$

which is again in the same space. We call the set of all vectors of the form (9.4) a subspace of the linear space of all 3D vectors. The term subspace is justified since not all 3D vectors are in it. Take for instance the vector n = v ∧ w (the cross product), which is perpendicular to both v and w. There is no way to write this vector as a linear combination of v and w!

We say our subspace has dimension 2 since it is generated, or spanned, by two vectors. These vectors have to be noncollinear; otherwise, they just define a line, or a 1D (1-dimensional) subspace. (In Section 2.8, we needed the concept of a subspace in order to find the orthogonal projection of w onto v. Thus the projection lived in the one-dimensional subspace formed by v.)

If two vectors are collinear, then they are also called linearly dependent. If v and w are linearly dependent, then v = sw. Conversely, if they are not collinear, they are called linearly independent. If v1, v2, v3 are linearly independent, then we will not have a solution set s1, s2 for

$$\mathbf{v}_3 = s_1\mathbf{v}_1 + s_2\mathbf{v}_2,$$

and the only way to express the zero vector,

$$\mathbf{0} = s_1\mathbf{v}_1 + s_2\mathbf{v}_2 + s_3\mathbf{v}_3$$

is if s1 = s2 = s3 = 0. Three linearly independent vectors in ℝ3 span the entire space and the vectors are said to form a basis for ℝ3.

Given two linearly independent vectors v and w, how do we decide if another vector u is in the subspace spanned by v and w? Simple: check the volume of the parallelepiped formed by the three vectors, which is equivalent to calculating the scalar triple product (8.17) and checking if it is zero (within a round-off tolerance). In Section 9.8, we’ll introduce the 3 × 3 determinant, which is a matrix-oriented calculation of volume.
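A minimal numpy sketch of this test (the helper name `in_subspace` and the tolerance are our own choices); it uses the 3 × 3 determinant, which equals the scalar triple product u · (v ∧ w):

```python
import numpy as np

def in_subspace(u, v, w, tol=1e-12):
    # u lies in the span of v and w exactly when the parallelepiped
    # formed by u, v, w has (near) zero volume.
    volume = np.linalg.det(np.column_stack((u, v, w)))
    return abs(volume) < tol

v = np.array([1.0, 0.0, 0.0])
w = np.array([0.0, 1.0, 0.0])
print(in_subspace(np.array([2.0, -3.0, 0.0]), v, w))  # True
print(in_subspace(np.array([0.0, 0.0, 1.0]), v, w))   # False
```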

We’ll revisit this topic in a more abstract setting for n-dimensional vectors in Chapter 14.

9.3 Scalings

A scaling is a linear map that enlarges or reduces vectors:

$$\mathbf{v}' = \begin{bmatrix}s_{1,1}&0&0\\0&s_{2,2}&0\\0&0&s_{3,3}\end{bmatrix}\mathbf{v}. \qquad(9.5)$$

If all scale factors si,i are larger than one, then all vectors are enlarged, as is done in Figure 9.2. If all si,i are positive yet less than one, all vectors are shrunk.

Figure 9.2

Scalings in 3D: the large torus is scaled by 1/3 in each coordinate to form the small torus.

Example 9.3

In this example,

$$\begin{bmatrix}s_{1,1}&0&0\\0&s_{2,2}&0\\0&0&s_{3,3}\end{bmatrix} = \begin{bmatrix}1/3&0&0\\0&1&0\\0&0&3\end{bmatrix},$$

we shrink in the e1-direction, leave the e2-direction unchanged, and stretch the e3-direction. See Figure 9.3.

Figure 9.3

Nonuniform scalings in 3D: the “standard” torus is scaled by 1/3, 1, 3 in the e1-, e2-, e3-directions, respectively.

Negative numbers for the si,i will cause a flip in addition to a scale. So, for instance

$$\begin{bmatrix}-2&0&0\\0&1&0\\0&0&-1\end{bmatrix}$$

will stretch and reverse in the e1-direction, leave the e2-direction unchanged, and will reverse in the e3-direction.

How do scalings affect volumes? If we map the unit cube, given by the three vectors e1, e2, e3 with a scaling, we get a rectangular box. Its side lengths are s1,1 in the e1-direction, s2,2 in the e2-direction, and s3,3 in the e3-direction. Hence, its volume is given by s1,1 s2,2 s3,3. A scaling thus changes the volume of an object by a factor that equals the product of the diagonal elements of the scaling matrix.2
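A short numpy check of this volume factor, using the scaling of Example 9.3 (a sketch; the determinant call anticipates Section 9.8):

```python
import numpy as np

S = np.diag([1/3, 1.0, 3.0])         # the scaling of Example 9.3
volume_factor = np.prod(np.diag(S))  # product of the diagonal elements
print(volume_factor)                 # 1.0: this particular scaling preserves volume
print(np.linalg.det(S))              # 1.0: the determinant agrees
```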

For 2 × 2 matrices in Chapter 4, we developed a geometric understanding of the map through illustrations of the action ellipse. For example, a nonuniform scaling is illustrated in Figure 4.3. In 3D, the same idea works as well. Now we can examine what happens to 3D unit vectors forming a sphere. They are mapped to an ellipsoid—the action ellipsoid! The action ellipsoid corresponding to Figure 9.2 is simply a sphere that is smaller than the unit sphere. The action ellipsoid corresponding to Figure 9.3 has its major axis in the e3-direction and its minor axis in the e1-direction. In Chapter 16, we will relate the shape of the ellipsoid to the linear map.

9.4 Reflections

If we reflect a vector about the e2, e3-plane, then its first component should change in sign:

$$\begin{bmatrix}v_1\\v_2\\v_3\end{bmatrix} \mapsto \begin{bmatrix}-v_1\\v_2\\v_3\end{bmatrix},$$

as shown in Sketch 9.3.

Sketch 9.3

Reflection of a vector about the e2, e3-plane.

This reflection is achieved by a scaling matrix:

$$\begin{bmatrix}-v_1\\v_2\\v_3\end{bmatrix} = \begin{bmatrix}-1&0&0\\0&1&0\\0&0&1\end{bmatrix}\begin{bmatrix}v_1\\v_2\\v_3\end{bmatrix}.$$

The following is also a reflection, as Sketch 9.4 shows:

Sketch 9.4

Reflection of a vector about the x1 = x3 plane.

$$\begin{bmatrix}v_1\\v_2\\v_3\end{bmatrix} \mapsto \begin{bmatrix}v_3\\v_2\\v_1\end{bmatrix}.$$

It interchanges the first and third component of a vector, and is thus a reflection about the plane x1 = x3. (This is an implicit plane equation, as discussed in Section 8.4.)

This map is achieved by the following matrix equation:

$$\begin{bmatrix}v_3\\v_2\\v_1\end{bmatrix} = \begin{bmatrix}0&0&1\\0&1&0\\1&0&0\end{bmatrix}\begin{bmatrix}v_1\\v_2\\v_3\end{bmatrix}.$$
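Both reflections are one-liners to verify numerically; a small sketch:

```python
import numpy as np

M = np.array([[0, 0, 1],
              [0, 1, 0],
              [1, 0, 0]])     # reflection about the plane x1 = x3

v = np.array([1, 2, 3])
print(M @ v)                  # [3 2 1]: first and third components swapped
print(np.linalg.det(M))       # -1.0: volume is kept, but its sign flips
```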

In Section 11.5, we develop a more general reflection matrix, called the Householder matrix. Instead of reflecting about a coordinate plane, with this matrix, we can reflect about a given (unit) normal. This matrix is central to the Householder method for solving a linear system in Section 13.1.

By their very nature, reflections do not change volumes—but they do change their signs. See Section 9.8 for more details.

9.5 Shears

What map takes a cube to the parallelepiped (skew box) of Sketch 9.5? The answer: a shear. Shears in 3D are more complicated than the 2D shears from Section 4.7 because there are so many more directions to shear. Let’s look at some of the shears more commonly used.

Sketch 9.5

A 3D shear parallel to the e1, e2-plane.

Consider the shear that maps e1 and e2 to themselves, and that also maps e3 to

$$\mathbf{a}_3 = \begin{bmatrix}a\\b\\1\end{bmatrix}.$$

The shear matrix S1 that accomplishes the desired task is easily found:

$$S_1 = \begin{bmatrix}1&0&a\\0&1&b\\0&0&1\end{bmatrix}.$$

It is illustrated in Sketch 9.5 with a = 1 and b = 1, and in Figure 9.4. Thus this map shears parallel to the [e1, e2]-plane. Suppose we apply this shear to a vector v resulting in

$$\mathbf{v}' = S_1\mathbf{v} = \begin{bmatrix}v_1 + av_3\\v_2 + bv_3\\v_3\end{bmatrix}.$$

Figure 9.4

Shears in 3D: a paraboloid is sheared in the e1- and e2-directions. The e3-direction runs through the center of the left paraboloid.

An ai,j element is a factor by which the jth component of v affects the ith component of v′.

What shear maps e2 and e3 to themselves, and also maps

$$\begin{bmatrix}a\\b\\c\end{bmatrix} \quad\text{to}\quad \begin{bmatrix}a\\0\\0\end{bmatrix}?$$

This shear is given by the matrix

$$S_2 = \begin{bmatrix}1&0&0\\-b/a&1&0\\-c/a&0&1\end{bmatrix}. \qquad(9.6)$$

One quick check gives:

$$\begin{bmatrix}1&0&0\\-b/a&1&0\\-c/a&0&1\end{bmatrix}\begin{bmatrix}a\\b\\c\end{bmatrix} = \begin{bmatrix}a\\0\\0\end{bmatrix};$$

thus our map does what it was meant to do: it shears parallel to the [e2, e3]-plane. (This is the shear of the Gauss elimination step that we will encounter in Section 12.2.)
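Here is a sketch of S2 in numpy (the helper name `gauss_shear` is our own; it assumes a ≠ 0):

```python
import numpy as np

def gauss_shear(x):
    # Build the shear S2 of (9.6), which maps x = [a, b, c] to [a, 0, 0].
    a, b, c = x
    return np.array([[1.0,    0.0, 0.0],
                     [-b / a, 1.0, 0.0],
                     [-c / a, 0.0, 1.0]])

x = np.array([2.0, 4.0, 6.0])
S2 = gauss_shear(x)
print(S2 @ x)                 # [2. 0. 0.]
print(np.linalg.det(S2))      # 1.0: shears preserve volume (Section 9.8)
```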

Although it is possible to shear in any direction, it is more common to shear parallel to a coordinate axis or coordinate plane. Try constructing a matrix for a shear parallel to the [e1, e3]-plane.

The shear matrix

$$\begin{bmatrix}1&a&b\\0&1&0\\0&0&1\end{bmatrix}$$

shears parallel to the e1-axis. Matrices for the other axes follow similarly.

How does a shear affect volume? For geometric intuition, consider the simple shear S1 from above. It maps the unit cube to a skew box with the same base and the same height—thus it does not change volume! All shears are volume preserving. After reading Section 9.8, revisit these shear matrices and check the volumes for yourself.

9.6 Rotations

Suppose you want to rotate a vector v around the e3-axis by 90° to a vector v′. Sketch 9.6 illustrates such a rotation:

Sketch 9.6

Rotation example.

$$\mathbf{v} = \begin{bmatrix}2\\0\\1\end{bmatrix} \quad\longrightarrow\quad \mathbf{v}' = \begin{bmatrix}0\\2\\1\end{bmatrix}.$$

A rotation around e3 by different angles would result in different vectors, but they all will have one thing in common: their third components will not be changed by the rotation. Thus, if we rotate a vector around e3, the rotation action will change only its first and second components. This suggests another look at the 2D rotation matrices from Section 4.6. Our desired rotation matrix R3 looks much like the one from (4.16):

$$R_3 = \begin{bmatrix}\cos\alpha&-\sin\alpha&0\\\sin\alpha&\cos\alpha&0\\0&0&1\end{bmatrix}. \qquad(9.7)$$

Figure 9.5 illustrates the letter L rotated through several angles about the e3-axis.

Figure 9.5

Rotations in 3D: the letter L rotated about the e3-axis.

Example 9.4

Let us verify that R3 performs as promised with α = 90°:

$$\begin{bmatrix}0&-1&0\\1&0&0\\0&0&1\end{bmatrix}\begin{bmatrix}2\\0\\1\end{bmatrix} = \begin{bmatrix}0\\2\\1\end{bmatrix},$$

so it works!

Similarly, we may rotate around the e2-axis; the corresponding matrix is

$$R_2 = \begin{bmatrix}\cos\alpha&0&\sin\alpha\\0&1&0\\-\sin\alpha&0&\cos\alpha\end{bmatrix}. \qquad(9.8)$$

Notice the pattern here. The rotation matrix for a rotation about the ei-axis is characterized by the ith row being $\mathbf{e}_i^{\mathrm{T}}$ and the ith column being $\mathbf{e}_i$. For completeness, the last rotation matrix about the e1-axis:

$$R_1 = \begin{bmatrix}1&0&0\\0&\cos\alpha&-\sin\alpha\\0&\sin\alpha&\cos\alpha\end{bmatrix}. \qquad(9.9)$$

The direction of rotation by a positive angle follows the right-hand rule: curl your fingers with the rotation, and your thumb points in the direction of the rotation axis.

If you examine the column vectors of a rotation matrix, you will see that each one is a unit length vector and they are orthogonal to each other. Thus, the column vectors form an orthonormal set of vectors, and a rotation matrix is an orthogonal matrix. (These properties hold for the row vectors of the matrix too.) As a result, we have that

$$R^{\mathrm{T}}R = I, \qquad R^{\mathrm{T}} = R^{-1}.$$

Additionally, if R rotates by θ, then R−1 rotates by −θ.
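These properties are easy to confirm numerically; a sketch using R3 from (9.7) with an arbitrary angle:

```python
import numpy as np

alpha = np.radians(30.0)
c, s = np.cos(alpha), np.sin(alpha)
R3 = np.array([[c,  -s,  0.0],
               [s,   c,  0.0],
               [0.0, 0.0, 1.0]])   # rotation about e3, equation (9.7)

print(np.allclose(R3.T @ R3, np.eye(3)))     # True: R^T R = I
print(np.allclose(R3.T, np.linalg.inv(R3)))  # True: R^T = R^{-1}
```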

How about a rotation by α degrees around an arbitrary vector a? The principle is illustrated in Sketch 9.7. The derivation of the following matrix is more tedious than called for here, so we just give the result:

$$R = \begin{bmatrix}a_1^2 + C(1-a_1^2) & a_1a_2(1-C) - a_3S & a_1a_3(1-C) + a_2S\\ a_1a_2(1-C) + a_3S & a_2^2 + C(1-a_2^2) & a_2a_3(1-C) - a_1S\\ a_1a_3(1-C) - a_2S & a_2a_3(1-C) + a_1S & a_3^2 + C(1-a_3^2)\end{bmatrix}, \qquad(9.10)$$

Sketch 9.7

Rotation about an arbitrary vector.

where we have set C = cos α and S = sin α. It is necessary that ||a|| = 1 in order for the rotation to take place without scaling. Figure 9.6 illustrates two examples of rotations about an arbitrary axis.

Figure 9.6

Rotations in 3D: the letter L is rotated about axes that are not the coordinate axes. On the right the point on the L that touches the rotation axes does not move.

Example 9.5

With a complicated result such as (9.10), a sanity check is not a bad idea. So let α = 90°,

$$\mathbf{a} = \begin{bmatrix}0\\0\\1\end{bmatrix} \quad\text{and}\quad \mathbf{v} = \begin{bmatrix}1\\0\\0\end{bmatrix}.$$

This means that we want to rotate v around a, or the e3-axis, by 90° as shown in Sketch 9.8. In advance, we know what R should be. In (9.10), C = 0 and S = 1, and we calculate

$$R = \begin{bmatrix}0&-1&0\\1&0&0\\0&0&1\end{bmatrix},$$

Sketch 9.8

A simple example of a rotation about a vector.

which is the expected matrix. We obtain

$$\mathbf{v}' = \begin{bmatrix}0\\1\\0\end{bmatrix}.$$

With some confidence that (9.10) works, let’s try a more complicated example.

Example 9.6

Let α = 90°,

$$\mathbf{a} = \begin{bmatrix}1/\sqrt{3}\\1/\sqrt{3}\\1/\sqrt{3}\end{bmatrix} \quad\text{and}\quad \mathbf{v} = \begin{bmatrix}1\\0\\0\end{bmatrix}.$$

With C = 0 and S = 1 in (9.10), we calculate

$$R = \begin{bmatrix}\tfrac{1}{3} & \tfrac{1}{3}-\tfrac{1}{\sqrt{3}} & \tfrac{1}{3}+\tfrac{1}{\sqrt{3}}\\[2pt] \tfrac{1}{3}+\tfrac{1}{\sqrt{3}} & \tfrac{1}{3} & \tfrac{1}{3}-\tfrac{1}{\sqrt{3}}\\[2pt] \tfrac{1}{3}-\tfrac{1}{\sqrt{3}} & \tfrac{1}{3}+\tfrac{1}{\sqrt{3}} & \tfrac{1}{3}\end{bmatrix}.$$

We obtain

$$\mathbf{v}' = \begin{bmatrix}1/3\\[2pt] 1/3 + 1/\sqrt{3}\\[2pt] 1/3 - 1/\sqrt{3}\end{bmatrix}.$$

Convince yourself that ||v′|| = ||v||.

Continue this example with the vector

$$\mathbf{v} = \begin{bmatrix}1\\1\\1\end{bmatrix}.$$

Surprised by the result?
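Equation (9.10) translates directly into code; the function name `rotation_about_axis` below is our own. Running it reproduces Example 9.6 and explains the last question: (1, 1, 1) lies along the rotation axis, so it is left unchanged.

```python
import numpy as np

def rotation_about_axis(a, alpha):
    # Rotation matrix (9.10) about the unit vector a by angle alpha.
    C, S = np.cos(alpha), np.sin(alpha)
    a1, a2, a3 = a
    return np.array([
        [a1*a1 + C*(1 - a1*a1), a1*a2*(1 - C) - a3*S,  a1*a3*(1 - C) + a2*S],
        [a1*a2*(1 - C) + a3*S,  a2*a2 + C*(1 - a2*a2), a2*a3*(1 - C) - a1*S],
        [a1*a3*(1 - C) - a2*S,  a2*a3*(1 - C) + a1*S,  a3*a3 + C*(1 - a3*a3)],
    ])

a = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)
R = rotation_about_axis(a, np.radians(90.0))
print(R @ np.array([1.0, 0.0, 0.0]))  # the v' of Example 9.6
print(R @ np.array([1.0, 1.0, 1.0]))  # [1. 1. 1.]: vectors along the axis are fixed
```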

It should be intuitively clear that rotations do not change volumes. Recall from 2D that rotations are rigid body motions.

9.7 Projections

Projections that are linear maps are parallel projections. There are two categories: if the projection direction is perpendicular to the projection plane, then it is an orthogonal projection; otherwise, it is an oblique projection. Two examples are illustrated in Figure 9.7, in which one of the key properties of projections is apparent: flattening. The orthogonal and oblique projection matrices that produced this figure are

$$\begin{bmatrix}1&0&0\\0&1&0\\0&0&0\end{bmatrix} \quad\text{and}\quad \begin{bmatrix}1&0&1/2\\0&1&1/2\\0&0&0\end{bmatrix},$$

Figure 9.7

Projections in 3D: on the left is an orthogonal projection, and on the right is an oblique projection of 45°.

respectively.

Projections are essential in computer graphics to view 3D geometry on a 2D screen. A parallel projection is a linear map, as opposed to a perspective projection, which is not. A parallel projection preserves relative dimensions of an object, thus it is used in drafting to produce accurate views of a design.

Recall from 2D, Section 4.8, that a projection reduces dimensionality and it is an idempotent map. It flattens geometry because a projection matrix P is rank deficient; in 3D this means that a vector is projected into a subspace, which can be a (2D) plane or a (1D) line. The idempotent property leaves a vector in the subspace of the map unchanged by the map: $P^2\mathbf{v} = P\mathbf{v}$. Let’s see how to construct an orthogonal projection in 3D.

First we choose the subspace U into which we would like to project. If we want to project onto a line (1D subspace), specify a unit vector u1. If we want to project into a plane (2D subspace), specify two orthonormal vectors u1, u2. Now form a matrix Ak from the vectors defining the k-dimensional subspace U:

$$A_1 = \mathbf{u}_1 \quad\text{or}\quad A_2 = \begin{bmatrix}\mathbf{u}_1 & \mathbf{u}_2\end{bmatrix}.$$

The projection matrix Pk is then defined as

$$P_k = A_kA_k^{\mathrm{T}}.$$

It follows that P1 is very similar to the projection matrix from Section 4.8 except the projection line is in 3D,

$$P_1 = A_1A_1^{\mathrm{T}} = \begin{bmatrix}u_{1,1}\mathbf{u}_1 & u_{2,1}\mathbf{u}_1 & u_{3,1}\mathbf{u}_1\end{bmatrix}.$$

Projection into a plane takes the form,

$$P_2 = A_2A_2^{\mathrm{T}} = \begin{bmatrix}\mathbf{u}_1 & \mathbf{u}_2\end{bmatrix}\begin{bmatrix}\mathbf{u}_1^{\mathrm{T}}\\ \mathbf{u}_2^{\mathrm{T}}\end{bmatrix}. \qquad(9.11)$$

Expanding this, we see the columns of P2 are linear combinations of u1 and u2,

$$P_2 = \begin{bmatrix}u_{1,1}\mathbf{u}_1 + u_{1,2}\mathbf{u}_2 & u_{2,1}\mathbf{u}_1 + u_{2,2}\mathbf{u}_2 & u_{3,1}\mathbf{u}_1 + u_{3,2}\mathbf{u}_2\end{bmatrix}.$$

The action of P1 and P2 is thus

$$P_1\mathbf{v} = (\mathbf{u}_1\cdot\mathbf{v})\,\mathbf{u}_1, \qquad(9.12)$$

$$P_2\mathbf{v} = (\mathbf{u}_1\cdot\mathbf{v})\,\mathbf{u}_1 + (\mathbf{u}_2\cdot\mathbf{v})\,\mathbf{u}_2. \qquad(9.13)$$
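The construction takes only a few lines in numpy; a sketch with one arbitrary orthonormal pair u1, u2 (our own choice):

```python
import numpy as np

u1 = np.array([1.0, 1.0, 0.0]) / np.sqrt(2.0)
u2 = np.array([0.0, 0.0, 1.0])   # u1, u2: an orthonormal pair

A2 = np.column_stack((u1, u2))
P2 = A2 @ A2.T                   # projection matrix, equation (9.11)

v = np.array([3.0, 1.0, 2.0])
print(P2 @ v)                    # equals (u1.v)u1 + (u2.v)u2, per (9.13)
print(np.allclose(P2 @ P2, P2))  # True: P2 is idempotent
```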

An application of these projections is demonstrated in the Gram-Schmidt orthonormal coordinate frame construction in Section 11.8.

Example 9.7

Let’s construct an orthogonal projection P2 into the [e1, e2]-plane. Although this example is easy enough that we could write down the matrix directly, let’s construct it with (9.11):

$$P_2 = \begin{bmatrix}\mathbf{e}_1 & \mathbf{e}_2\end{bmatrix}\begin{bmatrix}\mathbf{e}_1^{\mathrm{T}}\\ \mathbf{e}_2^{\mathrm{T}}\end{bmatrix} = \begin{bmatrix}1&0&0\\0&1&0\\0&0&0\end{bmatrix}.$$

The action achieved by this linear map is

$$\begin{bmatrix}v_1\\v_2\\0\end{bmatrix} = \begin{bmatrix}1&0&0\\0&1&0\\0&0&0\end{bmatrix}\begin{bmatrix}v_1\\v_2\\v_3\end{bmatrix}.$$

See Sketch 9.9 and Figure 9.7 (left).

Sketch 9.9

Projection example.

The example above is very simple, and we can immediately see that the projection direction is $\mathbf{d} = [0\ \ 0\ \ \pm 1]^{\mathrm{T}}$. This vector satisfies the equation

$$P_2\mathbf{d} = \mathbf{0},$$

and we see that the projection direction is in the kernel of the map.

The idempotent property for P2 is easily understood by noticing that $A_2^{\mathrm{T}}A_2$ is simply the 2 × 2 identity matrix,

$$P_2^2 = A_2A_2^{\mathrm{T}}A_2A_2^{\mathrm{T}} = \begin{bmatrix}\mathbf{u}_1 & \mathbf{u}_2\end{bmatrix}\begin{bmatrix}\mathbf{u}_1^{\mathrm{T}}\\ \mathbf{u}_2^{\mathrm{T}}\end{bmatrix}\begin{bmatrix}\mathbf{u}_1 & \mathbf{u}_2\end{bmatrix}\begin{bmatrix}\mathbf{u}_1^{\mathrm{T}}\\ \mathbf{u}_2^{\mathrm{T}}\end{bmatrix} = \begin{bmatrix}\mathbf{u}_1 & \mathbf{u}_2\end{bmatrix} I \begin{bmatrix}\mathbf{u}_1^{\mathrm{T}}\\ \mathbf{u}_2^{\mathrm{T}}\end{bmatrix} = P_2.$$

In addition to being idempotent, orthogonal projection matrices are symmetric. The action of the map is Pv and this vector is orthogonal to v − Pv, thus

$$0 = (P\mathbf{v})^{\mathrm{T}}(\mathbf{v} - P\mathbf{v}) = \mathbf{v}^{\mathrm{T}}(P^{\mathrm{T}} - P^{\mathrm{T}}P)\mathbf{v},$$

from which we conclude that P = PT.

We will examine oblique projections in the context of affine maps in Section 10.4. Finally, we note that a projection has a significant effect on the volume of an object. Since everything is flat after a projection, it has zero 3D volume.

9.8 Volumes and Linear Maps: Determinants

Most linear maps change volumes; some don’t. Since this is an important aspect of the action of a map, this section will discuss the effect of a linear map on volume. The unit cube in the [e1, e2, e3]-system has volume one. A linear map A will change that volume to that of the skew box spanned by the images of e1, e2, e3, i.e., by the volume spanned by the vectors a1, a2, a3—the column vectors of A. What is the volume spanned by a1, a2, a3?

First, let’s look at what we have done so far with areas and volumes. Recall the 2 × 2 determinant from Section 4.9. Through Sketch 4.8, the area of a 2D parallelogram was shown to be equivalent to a determinant. In fact, in Section 8.2 it was shown that the cross product can be used to calculate this area for a parallelogram embedded in 3D. With a very geometric approach, the scalar triple product of Section 8.5 gives us the means to calculate the volume of a parallelepiped by simply using a “base area times height” calculation. Let’s revisit that formula and look at it from the perspective of linear maps.

So, using linear maps, we want to illustrate that the volume of the parallelepiped, or skew box, simply reduces to a 3D determinant calculation. Proceeding directly with a sketch in the 3D case would be difficult to follow. For 3D, let’s augment the determinant idea with the tools from Section 5.4. There we demonstrated how shears—area-preserving linear maps—can be used to transform a matrix to upper triangular. These are the forward elimination steps of Gauss elimination.

First, let’s introduce a 3 × 3 determinant of a matrix A. It is easily remembered as an alternating sum of 2 × 2 determinants:

$$|A| = a_{1,1}\begin{vmatrix}a_{2,2}&a_{2,3}\\a_{3,2}&a_{3,3}\end{vmatrix} - a_{2,1}\begin{vmatrix}a_{1,2}&a_{1,3}\\a_{3,2}&a_{3,3}\end{vmatrix} + a_{3,1}\begin{vmatrix}a_{1,2}&a_{1,3}\\a_{2,2}&a_{2,3}\end{vmatrix}. \qquad(9.14)$$

The representation in (9.14) is called the cofactor expansion. Each (signed) 2 × 2 determinant is the cofactor of the ai,j it is paired with in the sum. The sign comes from the factor (−1)i+j. For example, the cofactor of a2,1 is

$$(-1)^{2+1}\begin{vmatrix}a_{1,2}&a_{1,3}\\a_{3,2}&a_{3,3}\end{vmatrix}.$$

The cofactor is also written as (−1)i+jMi,j where Mi,j is called the minor of ai,j. As a result, (9.14) is also known as expansion by minors. We’ll look into this method more in Section 12.6.

If (9.14) is expanded, then an interesting form for writing the determinant arises. The formula is nearly impossible to remember, but the following trick is not. Copy the first two columns after the last column. Next, form the product of the three “diagonals” and add them. Then, form the product of the three “antidiagonals” and subtract them. The three “plus” products may be written as:

$$a_{1,1}a_{2,2}a_{3,3},\qquad a_{1,2}a_{2,3}a_{3,1},\qquad a_{1,3}a_{2,1}a_{3,2},$$

and the three “minus” products as:

$$a_{3,1}a_{2,2}a_{1,3},\qquad a_{3,2}a_{2,3}a_{1,1},\qquad a_{3,3}a_{2,1}a_{1,2}.$$

The complete formula for the 3 × 3 determinant is

$$|A| = a_{1,1}a_{2,2}a_{3,3} + a_{1,2}a_{2,3}a_{3,1} + a_{1,3}a_{2,1}a_{3,2} - a_{3,1}a_{2,2}a_{1,3} - a_{3,2}a_{2,3}a_{1,1} - a_{3,3}a_{2,1}a_{1,2}.$$

Example 9.8

What is the volume spanned by the three vectors

$$\mathbf{a}_1 = \begin{bmatrix}4\\0\\0\end{bmatrix},\quad \mathbf{a}_2 = \begin{bmatrix}1\\4\\4\end{bmatrix},\quad \mathbf{a}_3 = \begin{bmatrix}0.1\\-0.1\\0.1\end{bmatrix}?$$

All we have to do is to compute

$$\det[\mathbf{a}_1, \mathbf{a}_2, \mathbf{a}_3] = 4\begin{vmatrix}4&-0.1\\4&0.1\end{vmatrix} = 4\bigl(4\times 0.1 - (-0.1)\times 4\bigr) = 3.2.$$

(Here we used an alternative notation, det, for the determinant.) In this computation, we did not write down zero terms.
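A numpy sketch that verifies Example 9.8, once with the cofactor expansion (9.14) and once with a library routine:

```python
import numpy as np

A = np.array([[4.0, 1.0,  0.1],
              [0.0, 4.0, -0.1],
              [0.0, 4.0,  0.1]])   # columns a1, a2, a3 of Example 9.8

# Cofactor expansion (9.14) along the first column:
det = (  A[0, 0] * (A[1, 1]*A[2, 2] - A[1, 2]*A[2, 1])
       - A[1, 0] * (A[0, 1]*A[2, 2] - A[0, 2]*A[2, 1])
       + A[2, 0] * (A[0, 1]*A[1, 2] - A[0, 2]*A[1, 1]))
print(det)                  # 3.2
print(np.linalg.det(A))     # 3.2, up to round-off
```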

As we have seen in Section 9.5, a 3D shear preserves volume. Therefore, we can apply a series of shears to the matrix A, resulting in a new matrix

$$\tilde A = \begin{bmatrix}\tilde a_{1,1}&\tilde a_{1,2}&\tilde a_{1,3}\\0&\tilde a_{2,2}&\tilde a_{2,3}\\0&0&\tilde a_{3,3}\end{bmatrix}.$$

The determinant of $\tilde A$ is

$$|\tilde A| = \tilde a_{1,1}\tilde a_{2,2}\tilde a_{3,3}, \qquad(9.15)$$

with, of course, $|A| = |\tilde A|$.

Let’s continue with Example 9.8. One simple row operation, row3 = row3 − row2, will achieve the upper triangular matrix

$$\tilde A = \begin{bmatrix}4&1&0.1\\0&4&-0.1\\0&0&0.2\end{bmatrix},$$

and we can determine that $|\tilde A| = 4 \cdot 4 \cdot 0.2 = 3.2 = |A|$.

For 3 × 3 matrices, we don’t actually calculate the volume of three vectors by proceeding with the forward elimination steps, or shears.3 We would just directly calculate the 3 × 3 determinant from (9.14). What is interesting about this development is that now we can illustrate, as in Sketch 9.10, how the determinant defines the volume of the skew box. The first two column vectors of $\tilde A$ lie in the [e1, e2]-plane. Their determinant defines the area of the parallelogram that they span; this determinant is $\tilde a_{1,1}\tilde a_{2,2}$. The height of the skew box is simply the e3 component of $\tilde{\mathbf{a}}_3$, namely $\tilde a_{3,3}$. Thus, we have an easy to visualize interpretation of the 3 × 3 determinant. And, from a slightly different perspective, we have revisited the geometric development of the determinant as the scalar triple product (Section 8.5).

Sketch 9.10

Determinant and volume in 3D.

Let’s conclude this section with some rules for determinants. Suppose we have two 3 × 3 matrices, A and B. The column vectors of A are a1, a2, a3.

  • The determinant of the transpose matrix equals that of the matrix: |A| = |AT|. This property allows us to interchange the terms “row” or “column” when working with determinants.
  • Exchanging two columns, creating a noncyclic permutation, changes the sign of the determinant: |a2 a1 a3| = −|A|.
  • Multiplying one column by a scalar c results in the determinant being multiplied by c: |ca1 a2 a3| = c|A|.
  • As an extension of the previous item: |cA| = c3|A|.
  • If A has a row of zeroes then |A| = 0.
  • If A has two identical rows then |A| = 0.
  • The sum of determinants is not the determinant of the sum, |A| + |B| ≠ |A + B|, in general.
  • The product of determinants is the determinant of the product: |AB| = |A||B|.
  • Multiples of rows can be added together without changing the determinant. For example, the shears of Gauss elimination do not change the determinant, as we observed in the simple example above.

    The determinant of the shear matrix S2 in (9.6) is one, thus |S2A| = |S2||A| = |A|.

  • A being invertible is equivalent to |A| ≠ 0 (see Section 9.10).
  • If A is invertible then

$$|A^{-1}| = \frac{1}{|A|}.$$

9.9 Combining Linear Maps

If we apply a linear map A to a vector v and then apply a map B to the result, we may write this as

$$\mathbf{v}' = BA\mathbf{v}.$$

Matrix multiplication is defined just as in the 2D case; the element ci,j of the product matrix C = BA is obtained as the dot product of the ith row of B with the jth column of A. A handy way to write the matrices so as to keep the dot products in order is

[Scheme: write A above and to the right of B; each entry of C = BA then appears at the intersection of a row of B and a column of A.]

Instead of a complicated formula, an example should suffice.

[Worked 3 × 3 instance of the scheme.]

We have computed the dot product of the boldface row of B and column of A to produce the boldface entry of C. In this example, B and A are 3 × 3 matrices, and thus the result is another 3 × 3 matrix. In the example in Section 9.1, a 3 × 3 matrix A is multiplied by a 3 × 1 matrix (vector) v resulting in a 3 × 1 matrix or vector. Thus two matrices need not be the same size in order to multiply them. There is a rule, however! Suppose we are to multiply two matrices A and B together as AB. The sizes of A and B are

$$m \times n \quad\text{and}\quad n \times p, \qquad(9.16)$$

respectively. The resulting matrix will be of size m × p—the “outside” dimensions in (9.16). In order to form AB, it is necessary that the “inside” dimensions, both n here, be equal. The matrix multiplication scheme from Section 4.2 simplifies hand-calculations by illustrating the resulting dimensions.
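A quick numpy sketch of the size rule (the all-ones matrices are placeholders):

```python
import numpy as np

B = np.ones((3, 3))
A = np.ones((3, 3))
C = B @ A
print(np.isclose(C[1, 2], B[1, :] @ A[:, 2]))  # True: c_{i,j} is row i of B dotted with column j of A

M = np.ones((2, 3))   # 2 x 3
v = np.ones(3)        # length-3 vector
print((M @ v).shape)  # (2,): the "inside" dimension 3 matched
# np.ones((3, 2)) @ np.ones((4, 1)) would raise an error: inside dimensions differ
```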

As in the 2D case, matrix multiplication does not commute! That is, AB ≠ BA in most cases. An interesting difference between 2D and 3D is the fact that in 2D, rotations did commute; however, in 3D they do not. For example, in 2D, rotating first by α and then by β is no different from doing it the other way around. In 3D, that is not the case. Let’s look at an example to illustrate this point.

Example 9.9

Let’s look at a rotation by −90° around the e1-axis with matrix R1 and a rotation by −90° around the e3-axis with matrix R3:

$$R_1 = \begin{bmatrix}1&0&0\\0&0&1\\0&-1&0\end{bmatrix} \quad\text{and}\quad R_3 = \begin{bmatrix}0&1&0\\-1&0&0\\0&0&1\end{bmatrix}.$$

Figure 9.8 illustrates what the algebra tells us:

$$R_3R_1 = \begin{bmatrix}0&0&1\\-1&0&0\\0&-1&0\end{bmatrix} \quad\text{is not equal to}\quad R_1R_3 = \begin{bmatrix}0&1&0\\0&0&1\\1&0&0\end{bmatrix}.$$

Figure 9.8

Combining 3D rotations: left and right, the original L is labeled I for identity matrix. On the left, R1 is applied and then R3, the result is labeled R3R1. On the right, R3 is applied and then R1, the result is labeled R1R3. This shows that 3D rotations do not commute.

Also helpful for understanding what is happening is to track the transformation of a point p on the L. Form the vector v = p − o, and let’s track

$$\mathbf{v} = \begin{bmatrix}0\\0\\1\end{bmatrix}.$$

In Figure 9.8 on the left, observe the transformation of v:

$$R_1\mathbf{v} = \begin{bmatrix}0\\1\\0\end{bmatrix} \quad\text{and}\quad R_3R_1\mathbf{v} = \begin{bmatrix}1\\0\\0\end{bmatrix}.$$

Now, on the right, observe the transformation of v:

$$R_3\mathbf{v} = \begin{bmatrix}0\\0\\1\end{bmatrix} \quad\text{and}\quad R_1R_3\mathbf{v} = \begin{bmatrix}0\\1\\0\end{bmatrix}.$$

So it does matter which rotation we perform first!
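Example 9.9 can be replayed in a few lines of numpy:

```python
import numpy as np

R1 = np.array([[1, 0, 0],
               [0, 0, 1],
               [0, -1, 0]])   # -90 degrees about e1
R3 = np.array([[0, 1, 0],
               [-1, 0, 0],
               [0, 0, 1]])    # -90 degrees about e3

v = np.array([0, 0, 1])
print(R3 @ (R1 @ v))          # [1 0 0]
print(R1 @ (R3 @ v))          # [0 1 0]: the order of the rotations matters
```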

9.10 Inverse Matrices

In Section 5.9, we saw how inverse matrices undo linear maps. A linear map A takes a vector v to its image v′. The inverse map, A−1, will take v′ back to v, i.e., A−1v′ = v or A−1Av = v. Thus, the combined action of A−1A has no effect on any vector v, which we can write as

$$A^{-1}A = I, \qquad(9.17)$$

where I is the 3 × 3 identity matrix. If we applied A−1 to v first, and then applied A, there would not be any action either; in other words,

$$AA^{-1} = I, \qquad(9.18)$$

too.

A matrix is not always invertible. For example, the projections from Section 9.7 are rank deficient, and therefore not invertible. This is apparent from Sketch 9.9: once a vector has been flattened into 2D, there isn’t enough information left to return it to 3D.

As we discovered in Section 9.6 on rotations, orthogonal matrices, which are constructed from a set of orthonormal vectors, possess the nice property RT = R−1. Forming the reverse rotation is simple and requires no computation; this provides for a huge savings in computer graphics where rotating objects is a common operation.

Scaling also has an inverse, which is simple to compute. If

$$S = \begin{bmatrix}s_{1,1}&0&0\\0&s_{2,2}&0\\0&0&s_{3,3}\end{bmatrix}, \quad\text{then}\quad S^{-1} = \begin{bmatrix}1/s_{1,1}&0&0\\0&1/s_{2,2}&0\\0&0&1/s_{3,3}\end{bmatrix}.$$

Here are more rules for matrices. These involve calculating with inverse matrices.

$$A^{-n} = (A^{-1})^n = \underbrace{A^{-1}\cdots A^{-1}}_{n\ \text{times}}$$
$$(A^{-1})^{-1} = A$$
$$(kA)^{-1} = \frac{1}{k}A^{-1}$$
$$(AB)^{-1} = B^{-1}A^{-1}$$

See Section 12.4 for details on calculating A−1.
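A numpy sketch checking two of these rules (A is the matrix of Example 9.2; B is our own diagonal example):

```python
import numpy as np

A = np.array([[2.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [1.0, 0.0, 0.5]])
B = np.diag([1.0, 2.0, 4.0])

lhs = np.linalg.inv(A @ B)
rhs = np.linalg.inv(B) @ np.linalg.inv(A)
print(np.allclose(lhs, rhs))     # True: (AB)^{-1} = B^{-1} A^{-1}

print(np.isclose(np.linalg.det(np.linalg.inv(A)),
                 1.0 / np.linalg.det(A)))   # True: |A^{-1}| = 1/|A| (Section 9.8)
```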

9.11 More on Matrices

A handful of matrix properties are explained and illustrated in Chapter 4. Here we restate them so they are conveniently together. These properties hold for n × n matrices (the topic of Chapter 12):

  • preserve scalings: A(cv) = cAv
  • preserve summations: A(u + v) = Au + Av
  • preserve linear combinations: A(au + bv) = aAu + bAv
  • distributive law: Av + Bv = (A + B)v
  • commutative law for addition: A + B = B + A
  • no commutative law for multiplication: AB ≠ BA in general
  • associative law for addition: A + (B + C) = (A + B) + C
  • associative law for multiplication: A(BC) = (AB)C
  • distributive laws: A(B + C) = AB + AC and (B + C)A = BA + CA

Scalar laws:

  • a(B + C) = aB + aC
  • (a + b)C = aC + bC
  • (ab)C = a(bC)
  • a(BC) = (aB)C = B(aC)

Laws involving determinants:

  • |A| = |AT|
  • |AB| = |A| · |B|
  • |A| + |B| ≠ |A + B|
  • |cA| = cn|A|

Laws involving exponents:

  • Ar = A ⋯ A (r times)
  • Ar+s = Ar As
  • Ars = (Ar)s
  • A0 = I

Laws involving the transpose:

  • [A + B]T = AT + BT
  • (AT)T = A
  • [cA]T = cAT
  • [AB]T = BTAT

The following key terms were introduced in this chapter:

  • 3D linear map
  • transpose matrix
  • linear space
  • vector space
  • subspace
  • linearity property
  • linear combination
  • linearly independent
  • linearly dependent
  • scale
  • action ellipsoid
  • rotation
  • rigid body motions
  • shear
  • reflection
  • projection
  • idempotent
  • orthographic projection
  • oblique projection
  • determinant
  • volume
  • scalar triple product
  • cofactor expansion
  • expansion by minors
  • inverse matrix
  • multiply matrices
  • noncommutative property of matrix multiplication
  • rules of matrix arithmetic

9.12 Exercises

  1. Let v′ = 3a1 + 2a2 + a3, where

    $$\mathbf{a}_1 = \begin{bmatrix}1\\1\\1\end{bmatrix},\quad \mathbf{a}_2 = \begin{bmatrix}2\\0\\0\end{bmatrix},\quad \mathbf{a}_3 = \begin{bmatrix}0\\3\\0\end{bmatrix}.$$

    What is v′? Write this equation in matrix form.

  2. What is the transpose of the matrix

    $$A = \begin{bmatrix}1&5&4\\1&2&0\\2&3&4\end{bmatrix}?$$

  3. Given a 2D linear subspace formed by vectors w and v, is u an element of that subspace?

    $$\mathbf{v} = \begin{bmatrix}1\\0\\0\end{bmatrix},\quad \mathbf{w} = \begin{bmatrix}1\\1\\1\end{bmatrix},\quad \mathbf{u} = \begin{bmatrix}0\\0\\1\end{bmatrix}.$$

  4. Given

    $$\mathbf{v} = \begin{bmatrix}3\\2\\1\end{bmatrix},\quad \mathbf{w} = \begin{bmatrix}1\\1\\2\end{bmatrix},\quad \mathbf{u} = \begin{bmatrix}7\\3\\0\end{bmatrix},$$

    is u in the subspace defined by v and w?

  5. Is the vector u = v ∧ w in the subspace defined by v and w?
  6. Let V1 be the one-dimensional subspace defined by

    $$\mathbf{v} = \begin{bmatrix}1\\1\\0\end{bmatrix}.$$

    What vector w′ in V1 is closest to

    $$\mathbf{w} = \begin{bmatrix}1\\0\\1\end{bmatrix}?$$

  7. The vectors v and w form a 2D subspace of ℝ3. Are they linearly dependent?
  8. What is the kernel of the matrix formed from linearly independent vectors v1, v2, v3?
  9. Describe the linear map given by the matrix

    $$\begin{bmatrix}1&0&0\\0&0&1\\0&1&0\end{bmatrix}$$

    by stating if it is volume preserving and stating the action of the map. Hint: Examine where the ei-axes are mapped.

  10. What matrix scales by 2 in the e1-direction, scales by 1/4 in the e2-direction, and reverses direction and scales by 4 in the e3-direction? Map the unit cube with this matrix. What is the volume of the resulting parallelepiped?
  11. What is the matrix that reflects a vector about the plane x1 = x2? Map the unit cube with this matrix. What is the volume of the resulting parallelepiped?
  12. What is the shear matrix that maps

    $$\begin{bmatrix}a\\b\\c\end{bmatrix} \quad\text{to}\quad \begin{bmatrix}0\\0\\c\end{bmatrix}?$$

    Map the unit cube with this matrix. What is the volume of the resulting parallelepiped?

  13. What matrix rotates around the e1-axis by α degrees?
  14. What matrix rotates by 45° around the vector $[1\ \ 0\ \ 1]^{\mathrm{T}}$?
  15. Construct the orthogonal projection matrix P that projects onto the line spanned by

    $$\mathbf{u} = \begin{bmatrix}1/\sqrt{3}\\1/\sqrt{3}\\1/\sqrt{3}\end{bmatrix}$$

    and what is the action of the map, v′ = Pv? What is the action of the map on the following two vectors:

    $$\mathbf{v}_1 = \begin{bmatrix}1\\1\\1\end{bmatrix} \quad\text{and}\quad \mathbf{v}_2 = \begin{bmatrix}0\\0\\1\end{bmatrix}?$$

    What is the rank of this matrix? What is the determinant?

  16. Construct the projection matrix P that projects into the plane spanned by

    $$\mathbf{u}_1 = \begin{bmatrix}1/\sqrt{2}\\1/\sqrt{2}\\0\end{bmatrix} \quad\text{and}\quad \mathbf{u}_2 = \begin{bmatrix}0\\0\\1\end{bmatrix}.$$

    What is the action of the map, v′ = Pv? What is the action of the map on the following vectors:

    $$\mathbf{v}_1 = \begin{bmatrix}1\\1\\1\end{bmatrix},\quad \mathbf{v}_2 = \begin{bmatrix}1\\0\\0\end{bmatrix},\quad \mathbf{v}_3 = \begin{bmatrix}1\\1\\0\end{bmatrix}?$$

    What is the rank of this matrix? What is the determinant?

  17. Given the projection matrix in Exercise 16, what is the projection direction?
  18. Given the projection matrix

    $$A = \begin{bmatrix}1&0&1\\0&1&0\\0&0&0\end{bmatrix},$$

    what is the projection direction? What type of projection is it?

  19. What is the cofactor expansion of

    $$A = \begin{bmatrix}1&2&3\\2&0&0\\1&0&1\end{bmatrix}?$$

    What is |A|?

  20. For scalar values c1, c2, c3 and a matrix A = [a1 a2 a3], what is the determinant of A1 = [c1a1 a2 a3], A2 = [c1a1 c2a2 a3], and A3 = [c1a1 c2a2 c3a3]?
  21. The matrix

    $$A = \begin{bmatrix}1&2&3\\2&0&0\\1&1&1\end{bmatrix}$$

    is invertible and |A| = 2. What is $|A^{-1}|$?

  22. Compute

    $$\begin{bmatrix}0&0&1\\1&2&0\\2&1&1\end{bmatrix}\begin{bmatrix}1&5&4\\1&2&0\\2&3&4\end{bmatrix}.$$

  23. What is AB and BA given

    $$A = \begin{bmatrix}1&2&3\\2&0&0\\1&1&1\end{bmatrix} \quad\text{and}\quad B = \begin{bmatrix}0&1&0\\1&1&1\\1&0&0\end{bmatrix}?$$

  24. What is AB and BA given

    $$A = \begin{bmatrix}1&2&3\\2&0&0\\1&1&1\end{bmatrix} \quad\text{and}\quad B = \begin{bmatrix}1&1\\2&2\\0&1\end{bmatrix}?$$

  25. Find the inverse for each of the following matrices:

    $$\text{rotation:}\ \begin{bmatrix}1/\sqrt{2}&0&1/\sqrt{2}\\0&1&0\\-1/\sqrt{2}&0&1/\sqrt{2}\end{bmatrix},\quad \text{scale:}\ \begin{bmatrix}1/2&0&0\\0&1/4&0\\0&0&2\end{bmatrix},\quad \text{projection:}\ \begin{bmatrix}1&0&1\\0&1&0\\0&0&0\end{bmatrix}.$$

  26. For what type of matrix is A−1 = AT?
  27. What is the inverse of the matrix

    $$A = \begin{bmatrix}5&8&2\\2&3&4\end{bmatrix}?$$

  28. If

    $$B = \begin{bmatrix}0&1&0\\1&1&1\\1&0&0\end{bmatrix} \quad\text{and}\quad B^{-1} = \begin{bmatrix}0&0&1\\1&0&0\\-1&1&-1\end{bmatrix},$$

    what is $(3B)^{-1}$?

  29. For matrix A in Exercise 2, what is (AT)T?

1Actually, perspective maps are also needed here. They will be discussed in Section 10.5.

2We have shown this only for the unit cube, but it is true for any other object as well.

3However, we will use forward elimination for n × n systems. The sign of the determinant in (9.15) needs to be adjusted if pivoting is included in forward elimination. See Section 12.6 for details.
