The flight simulator is an important part of the training of airplane pilots. It has a real cockpit, but what you see outside the windows is computer imagery. As you take a right turn, the terrain below changes accordingly; as you dive downwards, it comes closer to you. When you change the (simulated) position of your plane, the simulation software must recompute a new view of the terrain, clouds, or other aircraft. This is done through the application of 3D affine and linear maps.1 Figure 9.1 shows an image that was generated by an actual flight simulator. For each frame of the simulated scene, complex 3D computations are necessary, most of them consisting of the types of maps discussed in this section.
The general concept of a linear map in 3D is the same as that for a 2D map. Let v be a vector in the standard [e1, e2, e3]-coordinate system, i.e.,
\[
\mathbf{v} = v_1\mathbf{e}_1 + v_2\mathbf{e}_2 + v_3\mathbf{e}_3.
\]
(See Sketch 9.1 for an illustration.)
Let another coordinate system, the [a1, a2, a3]-coordinate system, be given by the origin o and three vectors a1, a2, a3. What vector v′ in the [a1, a2, a3]-system corresponds to v in the [e1, e2, e3]-system? Simply the vector with the same coordinates relative to the [a1, a2, a3]-system. Thus,
\[
\mathbf{v}' = v_1\mathbf{a}_1 + v_2\mathbf{a}_2 + v_3\mathbf{a}_3. \tag{9.1}
\]
This is illustrated by Sketch 9.2 and the following example.
Let
\[
\mathbf{v} = \begin{bmatrix}1\\1\\2\end{bmatrix},\quad
\mathbf{a}_1 = \begin{bmatrix}2\\0\\1\end{bmatrix},\quad
\mathbf{a}_2 = \begin{bmatrix}0\\1\\0\end{bmatrix},\quad
\mathbf{a}_3 = \begin{bmatrix}0\\0\\1/2\end{bmatrix}.
\]
Then
\[
\mathbf{v}' = 1\cdot\begin{bmatrix}2\\0\\1\end{bmatrix}
            + 1\cdot\begin{bmatrix}0\\1\\0\end{bmatrix}
            + 2\cdot\begin{bmatrix}0\\0\\1/2\end{bmatrix}
            = \begin{bmatrix}2\\1\\2\end{bmatrix}.
\]
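For readers who want to experiment, the computation above is easy to reproduce numerically. The following sketch uses Python with NumPy; this tooling is our choice for illustration, not something the text prescribes.

```python
import numpy as np

# Basis vectors a1, a2, a3 of the example.
a1 = np.array([2.0, 0.0, 1.0])
a2 = np.array([0.0, 1.0, 0.0])
a3 = np.array([0.0, 0.0, 0.5])

# v has coordinates (1, 1, 2) in the [e1, e2, e3]-system.
v = np.array([1.0, 1.0, 2.0])

# Equation (9.1): v' reuses those same coordinates relative to a1, a2, a3.
v_prime = v[0] * a1 + v[1] * a2 + v[2] * a3
print(v_prime)  # [2. 1. 2.]
```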
You should recall that we had the same configuration earlier for the 2D case—(9.1) corresponds directly to (4.2) of Section 4.1. In Section 4.2, we then introduced the matrix form. That is now an easy project for this chapter—nothing changes except the matrices will be 3 × 3 instead of 2 × 2. In 3D, a matrix equation looks like this:
\[
\mathbf{v}' = A\mathbf{v}, \tag{9.2}
\]
i.e., just the same as for the 2D case. Written out in detail, there is a difference:
\[
\begin{bmatrix}v'_1\\v'_2\\v'_3\end{bmatrix}
= \begin{bmatrix}a_{1,1}&a_{1,2}&a_{1,3}\\a_{2,1}&a_{2,2}&a_{2,3}\\a_{3,1}&a_{3,2}&a_{3,3}\end{bmatrix}
\begin{bmatrix}v_1\\v_2\\v_3\end{bmatrix}. \tag{9.3}
\]
All matrix properties from Chapter 4 carry over almost verbatim.
Returning to our example, it is quite easy to condense it into a matrix equation:
\[
\begin{bmatrix}2&0&0\\0&1&0\\1&0&1/2\end{bmatrix}
\begin{bmatrix}1\\1\\2\end{bmatrix}
= \begin{bmatrix}2\\1\\2\end{bmatrix}.
\]
Again, if we multiply a matrix A by a vector v, the ith component of the result vector is obtained as the dot product of the ith row of A and v.
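To make the row-wise dot-product rule concrete, here is a small check against the example matrix, again sketched in Python with NumPy (our illustration tool, not the book's):

```python
import numpy as np

A = np.array([[2.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [1.0, 0.0, 0.5]])
v = np.array([1.0, 1.0, 2.0])

# The i-th component of Av is the dot product of row i of A with v.
by_rows = np.array([A[i].dot(v) for i in range(3)])
assert np.allclose(by_rows, A @ v)
print(by_rows)  # [2. 1. 2.]
```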
The matrix A represents a linear map. Given the vector v in the [e1, e2, e3]-system, there is a vector v′ in the [a1, a2, a3]-system such that v′ has the same components in the [a1, a2, a3]-system as did v in the [e1, e2, e3]-system. The matrix A finds the components of v′ relative to the [e1, e2, e3]-system.
With the 2 × 2 matrices of Section 4.2, we introduced the transpose AT of a matrix A. We will need this for 3 × 3 matrices, and it is obtained by interchanging rows and columns, i.e.,
\[
\begin{bmatrix}2&3&-4\\ \mathbf{3}&\mathbf{9}&\mathbf{-4}\\ -1&-9&4\end{bmatrix}^{\mathrm{T}}
= \begin{bmatrix}2&\mathbf{3}&-1\\ 3&\mathbf{9}&-9\\ -4&\mathbf{-4}&4\end{bmatrix}.
\]
The boldface row of A has become the boldface column of AT. As a concise formula,
\[
a^{\mathrm{T}}_{i,j} = a_{j,i}.
\]
The set of all 3D vectors is referred to as a 3D linear space or vector space, and it is denoted as ℝ3. We associate with ℝ3 the operation of forming linear combinations. This means that if v and w are two vectors in this linear space, then any vector
\[
\mathbf{u} = s\mathbf{v} + t\mathbf{w} \tag{9.4}
\]
is also in this space. The vector u is then said to be a linear combination of v and w. This is also called the linearity property. Notice that the linear combination (9.4) combines scalar multiplication and vector addition. These are the key operations necessary for a linear space.
Select two vectors v and w and consider all vectors u that may be expressed as (9.4) with arbitrary scalars s, t. Clearly, all vectors u form a subset of all 3D vectors. But beyond that, they form a linear space themselves—a 2D space. For if two vectors u1 and u2 are in this space, then they can be written as
\[
\mathbf{u}_1 = s_1\mathbf{v} + t_1\mathbf{w} \quad\text{and}\quad \mathbf{u}_2 = s_2\mathbf{v} + t_2\mathbf{w},
\]
and thus any linear combination of them can be written as
\[
\alpha\mathbf{u}_1 + \beta\mathbf{u}_2 = (\alpha s_1 + \beta s_2)\mathbf{v} + (\alpha t_1 + \beta t_2)\mathbf{w},
\]
which is again in the same space. We call the set of all vectors of the form (9.4) a subspace of the linear space of all 3D vectors. The term subspace is justified since not all 3D vectors are in it. Take for instance the vector n = v ∧ w, which is perpendicular to both v and w. There is no way to write this vector as a linear combination of v and w!
We say our subspace has dimension 2 since it is generated, or spanned, by two vectors. These vectors have to be noncollinear; otherwise, they just define a line, or a 1D (1-dimensional) subspace. (In Section 2.8, we needed the concept of a subspace in order to find the orthogonal projection of w onto v. Thus the projection lived in the one-dimensional subspace formed by v.)
If two vectors are collinear, then they are also called linearly dependent. If v and w are linearly dependent, then v = sw. Conversely, if they are not collinear, they are called linearly independent. If v1, v2, v3 are linearly independent, then we will not have a solution set s1, s2 for
\[
\mathbf{v}_3 = s_1\mathbf{v}_1 + s_2\mathbf{v}_2,
\]
and the only way to express the zero vector,
\[
\mathbf{0} = s_1\mathbf{v}_1 + s_2\mathbf{v}_2 + s_3\mathbf{v}_3
\]
is if s1 = s2 = s3 = 0. Three linearly independent vectors in ℝ3 span the entire space and the vectors are said to form a basis for ℝ3.
Given two linearly independent vectors v and w, how do we decide if another vector u is in the subspace spanned by v and w? Simple: check the volume of the parallelepiped formed by the three vectors, which is equivalent to calculating the scalar triple product (8.17) and checking if it is zero (within a round-off tolerance). In Section 9.8, we’ll introduce the 3 × 3 determinant, which is a matrix-oriented calculation of volume.
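The scalar triple product test is a one-liner in code. The following sketch (Python/NumPy, assumed here for illustration) wraps it in a hypothetical helper `in_span`:

```python
import numpy as np

def in_span(u, v, w, tol=1e-10):
    """True if u lies in the subspace spanned by the (independent) vectors
    v and w.  The scalar triple product u . (v x w) is the signed volume
    of the parallelepiped; it vanishes exactly when u is coplanar with
    v and w."""
    return abs(np.dot(u, np.cross(v, w))) < tol

v = np.array([1.0, 0.0, 0.0])
w = np.array([0.0, 1.0, 1.0])
print(in_span(2 * v + 3 * w, v, w))   # True: a linear combination of v, w
print(in_span(np.cross(v, w), v, w))  # False: perpendicular to both
```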
We’ll revisit this topic in a more abstract setting for n-dimensional vectors in Chapter 14.
A scaling is a linear map that enlarges or reduces vectors:
\[
\mathbf{v}' = \begin{bmatrix}s_{1,1}&0&0\\0&s_{2,2}&0\\0&0&s_{3,3}\end{bmatrix}\mathbf{v}. \tag{9.5}
\]
If all scale factors si,i are larger than one, then all vectors are enlarged, as is done in Figure 9.2. If all si,i are positive yet less than one, all vectors are shrunk.
In this example,
\[
\begin{bmatrix}s_{1,1}&0&0\\0&s_{2,2}&0\\0&0&s_{3,3}\end{bmatrix}
= \begin{bmatrix}1/3&0&0\\0&1&0\\0&0&3\end{bmatrix},
\]
we shrink in the e1-direction, leave the e2-direction unchanged, and stretch the e3-direction. See Figure 9.3.
Negative numbers for the si,i will cause a flip in addition to a scale. So, for instance
\[
\begin{bmatrix}-2&0&0\\0&1&0\\0&0&-1\end{bmatrix}
\]
will stretch and reverse in the e1-direction, leave the e2-direction unchanged, and will reverse in the e3-direction.
How do scalings affect volumes? If we map the unit cube, given by the three vectors e1, e2, e3 with a scaling, we get a rectangular box. Its side lengths are s1,1 in the e1-direction, s2,2 in the e2-direction, and s3,3 in the e3-direction. Hence, its volume is given by s1,1 s2,2 s3,3. A scaling thus changes the volume of an object by a factor that equals the product of the diagonal elements of the scaling matrix.2
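The volume factor is easy to check numerically. The sketch below (Python/NumPy, our illustration tool) uses the scaling of Figure 9.3:

```python
import numpy as np

# The scaling of Figure 9.3: shrink e1, keep e2, stretch e3.
S = np.diag([1.0 / 3.0, 1.0, 3.0])

# The volume scale factor is the product of the diagonal entries ...
factor = np.prod(np.diag(S))
# ... which here is (1/3) * 1 * 3 = 1: this particular scaling distorts
# shape but preserves volume.
print(factor)  # 1.0 (up to round-off)
```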
For 2 × 2 matrices in Chapter 4, we developed a geometric understanding of the map through illustrations of the action ellipse. For example, a nonuniform scaling is illustrated in Figure 4.3. In 3D, the same idea works as well. Now we can examine what happens to 3D unit vectors forming a sphere. They are mapped to an ellipsoid—the action ellipsoid! The action ellipsoid corresponding to Figure 9.2 is simply a sphere that is smaller than the unit sphere. The action ellipsoid corresponding to Figure 9.3 has its major axis in the e3-direction and its minor axis in the e1-direction. In Chapter 16, we will relate the shape of the ellipsoid to the linear map.
If we reflect a vector about the e2, e3-plane, then its first component should change in sign:
\[
\begin{bmatrix}v_1\\v_2\\v_3\end{bmatrix} \to \begin{bmatrix}-v_1\\v_2\\v_3\end{bmatrix},
\]
as shown in Sketch 9.3.
This reflection is achieved by a scaling matrix:
\[
\begin{bmatrix}-v_1\\v_2\\v_3\end{bmatrix}
= \begin{bmatrix}-1&0&0\\0&1&0\\0&0&1\end{bmatrix}
\begin{bmatrix}v_1\\v_2\\v_3\end{bmatrix}.
\]
The following is also a reflection, as Sketch 9.4 shows:
\[
\begin{bmatrix}v_1\\v_2\\v_3\end{bmatrix} \to \begin{bmatrix}v_3\\v_2\\v_1\end{bmatrix}.
\]
It interchanges the first and third component of a vector, and is thus a reflection about the plane x1 = x3. (This is an implicit plane equation, as discussed in Section 8.4.)
This map is achieved by the following matrix equation:
\[
\begin{bmatrix}v_3\\v_2\\v_1\end{bmatrix}
= \begin{bmatrix}0&0&1\\0&1&0\\1&0&0\end{bmatrix}
\begin{bmatrix}v_1\\v_2\\v_3\end{bmatrix}.
\]
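A quick numerical check of this reflection (Python/NumPy sketch, an editorial illustration) also demonstrates that reflecting twice is the identity:

```python
import numpy as np

# Reflection about the plane x1 = x3: swap first and third components.
M = np.array([[0.0, 0.0, 1.0],
              [0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0]])
v = np.array([1.0, 2.0, 3.0])
print(M @ v)  # [3. 2. 1.]

# Reflecting twice returns the original vector: M is its own inverse.
assert np.allclose(M @ (M @ v), v)
```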
In Section 11.5, we develop a more general reflection matrix, called the Householder matrix. Instead of reflecting about a coordinate plane, with this matrix, we can reflect about a given (unit) normal. This matrix is central to the Householder method for solving a linear system in Section 13.1.
By their very nature, reflections do not change volumes—but they do change their signs. See Section 9.8 for more details.
What map takes a cube to the parallelepiped (skew box) of Sketch 9.5? The answer: a shear. Shears in 3D are more complicated than the 2D shears from Section 4.7 because there are so many more directions to shear. Let’s look at some of the shears more commonly used.
Consider the shear that maps e1 and e2 to themselves, and that also maps e3 to
\[
\mathbf{a}_3 = \begin{bmatrix}a\\b\\1\end{bmatrix}.
\]
The shear matrix S1 that accomplishes the desired task is easily found:
\[
S_1 = \begin{bmatrix}1&0&a\\0&1&b\\0&0&1\end{bmatrix}.
\]
It is illustrated in Sketch 9.5 with a = 1 and b = 1, and in Figure 9.4. Thus this map shears parallel to the [e1, e2]-plane. Suppose we apply this shear to a vector v resulting in
\[
\mathbf{v}' = S_1\mathbf{v} = \begin{bmatrix}v_1 + av_3\\ v_2 + bv_3\\ v_3\end{bmatrix}.
\]
An element $a_{i,j}$ of the matrix is the factor by which the $j$th component of $\mathbf{v}$ affects the $i$th component of $\mathbf{v}'$.
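The shear's action is easy to verify numerically; here is a Python/NumPy sketch (our illustration tool) with $a = b = 1$:

```python
import numpy as np

a, b = 1.0, 1.0
S1 = np.array([[1.0, 0.0, a],
               [0.0, 1.0, b],
               [0.0, 0.0, 1.0]])

v = np.array([1.0, 2.0, 3.0])
v_prime = S1 @ v
# Entry (1,3) = a feeds component 3 of v into component 1 of v',
# entry (2,3) = b feeds component 3 into component 2; v3 is untouched.
print(v_prime)  # [4. 5. 3.]
```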
What shear maps e2 and e3 to themselves, and also maps
\[
\begin{bmatrix}a\\b\\c\end{bmatrix} \quad\text{to}\quad \begin{bmatrix}a\\0\\0\end{bmatrix}?
\]
This shear is given by the matrix
\[
S_2 = \begin{bmatrix}1&0&0\\ -b/a&1&0\\ -c/a&0&1\end{bmatrix}. \tag{9.6}
\]
One quick check gives:
\[
\begin{bmatrix}1&0&0\\ -b/a&1&0\\ -c/a&0&1\end{bmatrix}
\begin{bmatrix}a\\b\\c\end{bmatrix}
= \begin{bmatrix}a\\0\\0\end{bmatrix};
\]
thus our map does what it was meant to do: it shears parallel to the [e2, e3]-plane. (This is the shear of the Gauss elimination step that we will encounter in Section 12.2.)
Although it is possible to shear in any direction, it is more common to shear parallel to a coordinate axis or coordinate plane. Try constructing a matrix for a shear parallel to the [e1, e3]-plane.
\[
\begin{bmatrix}1&a&b\\0&1&0\\0&0&1\end{bmatrix}
\]
shears parallel to the e1-axis. Matrices for the other axes follow similarly.
How does a shear affect volume? For a geometric feeling, notice the simple shear S1 from above. It maps the unit cube to a skew box with the same base and the same height—thus it does not change volume! All shears are volume preserving. After reading Section 9.8, revisit these shear matrices and check the volumes for yourself.
Suppose you want to rotate a vector v around the e3-axis by 90° to a vector v′. Sketch 9.6 illustrates such a rotation:
\[
\mathbf{v} = \begin{bmatrix}2\\0\\1\end{bmatrix} \quad\to\quad \mathbf{v}' = \begin{bmatrix}0\\2\\1\end{bmatrix}.
\]
A rotation around e3 by different angles would result in different vectors, but they all will have one thing in common: their third components will not be changed by the rotation. Thus, if we rotate a vector around e3, the rotation action will change only its first and second components. This suggests another look at the 2D rotation matrices from Section 4.6. Our desired rotation matrix R3 looks much like the one from (4.16):
\[
R_3 = \begin{bmatrix}\cos\alpha&-\sin\alpha&0\\ \sin\alpha&\cos\alpha&0\\ 0&0&1\end{bmatrix}. \tag{9.7}
\]
Figure 9.5 illustrates the letter L rotated through several angles about the e3-axis.
Let us verify that R3 performs as promised with α = 90°:
\[
\begin{bmatrix}0&-1&0\\1&0&0\\0&0&1\end{bmatrix}
\begin{bmatrix}2\\0\\1\end{bmatrix}
= \begin{bmatrix}0\\2\\1\end{bmatrix},
\]
so it works!
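The same verification can be run numerically; this Python/NumPy sketch (an editorial illustration) builds R3 from (9.7) rather than hard-coding it:

```python
import numpy as np

alpha = np.radians(90.0)
R3 = np.array([[np.cos(alpha), -np.sin(alpha), 0.0],
               [np.sin(alpha),  np.cos(alpha), 0.0],
               [0.0,            0.0,           1.0]])

v = np.array([2.0, 0.0, 1.0])
v_prime = R3 @ v
# The third component is unchanged by a rotation about e3.
print(np.round(v_prime, 10))  # [0. 2. 1.]
```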
Similarly, we may rotate around the e2-axis; the corresponding matrix is
\[
R_2 = \begin{bmatrix}\cos\alpha&0&\sin\alpha\\ 0&1&0\\ -\sin\alpha&0&\cos\alpha\end{bmatrix}. \tag{9.8}
\]
Notice the pattern here. The rotation matrix for a rotation about the $\mathbf{e}_i$-axis is characterized by its $i$th row and $i$th column being those of the identity matrix:
\[
R_1 = \begin{bmatrix}1&0&0\\ 0&\cos\alpha&-\sin\alpha\\ 0&\sin\alpha&\cos\alpha\end{bmatrix}. \tag{9.9}
\]
The direction of rotation by a positive angle follows the right-hand rule: curl your fingers with the rotation, and your thumb points in the direction of the rotation axis.
If you examine the column vectors of a rotation matrix, you will see that each one is a unit length vector and they are orthogonal to each other. Thus, the column vectors form an orthonormal set of vectors, and a rotation matrix is an orthogonal matrix. (These properties hold for the row vectors of the matrix too.) As a result, we have that
\[
R^{\mathrm{T}}R = I \quad\text{and}\quad R^{\mathrm{T}} = R^{-1}.
\]
Additionally, if R rotates by θ, then R−1 rotates by −θ.
How about a rotation by α degrees around an arbitrary vector a? The principle is illustrated in Sketch 9.7. The derivation of the following matrix is more tedious than called for here, so we just give the result:
\[
R = \begin{bmatrix}
a_1^2 + C(1-a_1^2) & a_1a_2(1-C) - a_3S & a_1a_3(1-C) + a_2S\\
a_1a_2(1-C) + a_3S & a_2^2 + C(1-a_2^2) & a_2a_3(1-C) - a_1S\\
a_1a_3(1-C) - a_2S & a_2a_3(1-C) + a_1S & a_3^2 + C(1-a_3^2)
\end{bmatrix}, \tag{9.10}
\]
where we have set C = cos α and S = sin α. It is necessary that ||a|| = 1 in order for the rotation to take place without scaling. Figure 9.6 illustrates two examples of rotations about an arbitrary axis.
With a complicated result such as (9.10), a sanity check is not a bad idea. So let α = 90°,
\[
\mathbf{a} = \begin{bmatrix}0\\0\\1\end{bmatrix}
\quad\text{and}\quad
\mathbf{v} = \begin{bmatrix}1\\0\\0\end{bmatrix}.
\]
This means that we want to rotate v around a, or the e3-axis, by 90° as shown in Sketch 9.8. In advance, we know what R should be. In (9.10), C = 0 and S = 1, and we calculate
\[
R = \begin{bmatrix}0&-1&0\\1&0&0\\0&0&1\end{bmatrix},
\]
which is the expected matrix. We obtain
\[
\mathbf{v}' = \begin{bmatrix}0\\1\\0\end{bmatrix}.
\]
With some confidence that (9.10) works, let’s try a more complicated example.
Let α = 90°,
\[
\mathbf{a} = \begin{bmatrix}1/\sqrt{3}\\1/\sqrt{3}\\1/\sqrt{3}\end{bmatrix}
\quad\text{and}\quad
\mathbf{v} = \begin{bmatrix}1\\0\\0\end{bmatrix}.
\]
With C = 0 and S = 1 in (9.10), we calculate
\[
R = \begin{bmatrix}
\tfrac{1}{3} & \tfrac{1}{3}-\tfrac{1}{\sqrt{3}} & \tfrac{1}{3}+\tfrac{1}{\sqrt{3}}\\[2pt]
\tfrac{1}{3}+\tfrac{1}{\sqrt{3}} & \tfrac{1}{3} & \tfrac{1}{3}-\tfrac{1}{\sqrt{3}}\\[2pt]
\tfrac{1}{3}-\tfrac{1}{\sqrt{3}} & \tfrac{1}{3}+\tfrac{1}{\sqrt{3}} & \tfrac{1}{3}
\end{bmatrix}.
\]
We obtain
\[
\mathbf{v}' = \begin{bmatrix}1/3\\ 1/3 + 1/\sqrt{3}\\ 1/3 - 1/\sqrt{3}\end{bmatrix}.
\]
Convince yourself that ||v′|| = ||v||.
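One way to convince yourself is to let a computer build (9.10) and check the length. This Python/NumPy sketch (an editorial illustration; the helper name `rotation_about_axis` is ours) does exactly that:

```python
import numpy as np

def rotation_about_axis(a, alpha):
    """Build the matrix of (9.10): rotation by alpha (radians) about the
    unit vector a = (a1, a2, a3)."""
    a1, a2, a3 = a
    C, S = np.cos(alpha), np.sin(alpha)
    return np.array([
        [a1*a1 + C*(1 - a1*a1), a1*a2*(1 - C) - a3*S,  a1*a3*(1 - C) + a2*S],
        [a1*a2*(1 - C) + a3*S,  a2*a2 + C*(1 - a2*a2), a2*a3*(1 - C) - a1*S],
        [a1*a3*(1 - C) - a2*S,  a2*a3*(1 - C) + a1*S,  a3*a3 + C*(1 - a3*a3)],
    ])

a = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)
v = np.array([1.0, 0.0, 0.0])
v_prime = rotation_about_axis(a, np.pi / 2) @ v

# A rotation preserves length: ||v'|| = ||v||.
assert np.isclose(np.linalg.norm(v_prime), np.linalg.norm(v))
```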
Continue this example with the vector
\[
\mathbf{v} = \begin{bmatrix}1\\1\\1\end{bmatrix}.
\]
Surprised by the result?
It should be intuitively clear that rotations do not change volumes. Recall from 2D that rotations are rigid body motions.
Projections that are linear maps are parallel projections. There are two categories. If the projection direction is perpendicular to the projection plane, then it is an orthogonal projection; otherwise it is an oblique projection. Two examples are illustrated in Figure 9.7, in which one of the key properties of projections is apparent: flattening. The orthogonal and oblique projection matrices that produced this figure are
\[
\begin{bmatrix}1&0&0\\0&1&0\\0&0&0\end{bmatrix}
\quad\text{and}\quad
\begin{bmatrix}1&0&1/\sqrt{2}\\0&1&1/\sqrt{2}\\0&0&0\end{bmatrix},
\]
respectively.
Projections are essential in computer graphics to view 3D geometry on a 2D screen. A parallel projection is a linear map, as opposed to a perspective projection, which is not. A parallel projection preserves relative dimensions of an object, thus it is used in drafting to produce accurate views of a design.
Recall from 2D, Section 4.8, that a projection reduces dimensionality and it is an idempotent map. It flattens geometry because a projection matrix P is rank deficient; in 3D this means that a vector is projected into a subspace, which can be a (2D) plane or (1D) line. The idempotent property leaves a vector in the subspace of the map unchanged by the map, Pv = P2v. Let’s see how to construct an orthogonal projection in 3D.
First we choose the subspace U into which we would like to project. If we want to project onto a line (1D subspace), specify a unit vector u1. If we want to project into a plane (2D subspace), specify two orthonormal vectors u1, u2. Now form a matrix Ak from the vectors defining the k-dimensional subspace U:
\[
A_1 = \mathbf{u}_1 \quad\text{or}\quad A_2 = \begin{bmatrix}\mathbf{u}_1 & \mathbf{u}_2\end{bmatrix}.
\]
The projection matrix Pk is then defined as
\[
P_k = A_kA_k^{\mathrm{T}}.
\]
It follows that P1 is very similar to the projection matrix from Section 4.8 except the projection line is in 3D,
\[
P_1 = A_1A_1^{\mathrm{T}} = \begin{bmatrix}u_{1,1}\mathbf{u}_1 & u_{2,1}\mathbf{u}_1 & u_{3,1}\mathbf{u}_1\end{bmatrix}.
\]
Projection into a plane takes the form,
\[
P_2 = A_2A_2^{\mathrm{T}} = \begin{bmatrix}\mathbf{u}_1 & \mathbf{u}_2\end{bmatrix}\begin{bmatrix}\mathbf{u}_1^{\mathrm{T}}\\ \mathbf{u}_2^{\mathrm{T}}\end{bmatrix}. \tag{9.11}
\]
Expanding this, we see the columns of P2 are linear combinations of u1 and u2,
\[
P_2 = \begin{bmatrix}u_{1,1}\mathbf{u}_1 + u_{1,2}\mathbf{u}_2 & u_{2,1}\mathbf{u}_1 + u_{2,2}\mathbf{u}_2 & u_{3,1}\mathbf{u}_1 + u_{3,2}\mathbf{u}_2\end{bmatrix}.
\]
The action of P1 and P2 is thus
\[
P_1\mathbf{v} = (\mathbf{u}_1\cdot\mathbf{v})\mathbf{u}_1, \tag{9.12}
\]
\[
P_2\mathbf{v} = (\mathbf{u}_1\cdot\mathbf{v})\mathbf{u}_1 + (\mathbf{u}_2\cdot\mathbf{v})\mathbf{u}_2. \tag{9.13}
\]
An application of these projections is demonstrated in the Gram-Schmidt orthonormal coordinate frame construction in Section 11.8.
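The construction and the action (9.13) can be checked together in a few lines. This Python/NumPy sketch (an editorial illustration) projects into the plane spanned by two orthonormal vectors:

```python
import numpy as np

u1 = np.array([1.0, 1.0, 0.0]) / np.sqrt(2.0)
u2 = np.array([0.0, 0.0, 1.0])

A2 = np.column_stack([u1, u2])  # 3 x 2 matrix [u1 u2]
P2 = A2 @ A2.T                  # projection into span{u1, u2}, Eq. (9.11)

v = np.array([1.0, 2.0, 3.0])
# Action (9.13): P2 v = (u1 . v) u1 + (u2 . v) u2.
assert np.allclose(P2 @ v, u1.dot(v) * u1 + u2.dot(v) * u2)

# P2 is idempotent and symmetric.
assert np.allclose(P2 @ P2, P2)
assert np.allclose(P2, P2.T)
```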
Let's construct an orthogonal projection P2 into the [e1, e2]-plane. Although this example is simple enough that we could write down the matrix directly, let's construct it with (9.11):
\[
P_2 = \begin{bmatrix}\mathbf{e}_1 & \mathbf{e}_2\end{bmatrix}\begin{bmatrix}\mathbf{e}_1^{\mathrm{T}}\\ \mathbf{e}_2^{\mathrm{T}}\end{bmatrix}
= \begin{bmatrix}1&0&0\\0&1&0\\0&0&0\end{bmatrix}.
\]
The action achieved by this linear map is
\[
\begin{bmatrix}v_1\\v_2\\0\end{bmatrix}
= \begin{bmatrix}1&0&0\\0&1&0\\0&0&0\end{bmatrix}
\begin{bmatrix}v_1\\v_2\\v_3\end{bmatrix}.
\]
See Sketch 9.9 and Figure 9.7 (left).
The example above is very simple, and we can immediately see that the projection direction is d = [0 0 ± 1]T. This vector satisfies the equation
\[
P_2\mathbf{d} = \mathbf{0},
\]
and we see that the projection direction is in the kernel of the map.
The idempotent property for $P_2$ is easily understood by noticing that $A_2^{\mathrm{T}}A_2 = I$:
\[
P_2^2 = A_2A_2^{\mathrm{T}}A_2A_2^{\mathrm{T}}
= \begin{bmatrix}\mathbf{u}_1 & \mathbf{u}_2\end{bmatrix}\begin{bmatrix}\mathbf{u}_1^{\mathrm{T}}\\ \mathbf{u}_2^{\mathrm{T}}\end{bmatrix}\begin{bmatrix}\mathbf{u}_1 & \mathbf{u}_2\end{bmatrix}\begin{bmatrix}\mathbf{u}_1^{\mathrm{T}}\\ \mathbf{u}_2^{\mathrm{T}}\end{bmatrix}
= \begin{bmatrix}\mathbf{u}_1 & \mathbf{u}_2\end{bmatrix}I\begin{bmatrix}\mathbf{u}_1^{\mathrm{T}}\\ \mathbf{u}_2^{\mathrm{T}}\end{bmatrix}
= P_2.
\]
In addition to being idempotent, orthogonal projection matrices are symmetric. The action of the map is Pv and this vector is orthogonal to v − Pv, thus
\[
0 = (P\mathbf{v})^{\mathrm{T}}(\mathbf{v} - P\mathbf{v}) = \mathbf{v}^{\mathrm{T}}(P^{\mathrm{T}} - P^{\mathrm{T}}P)\mathbf{v},
\]
from which we conclude that P = PT.
We will examine oblique projections in the context of affine maps in Section 10.4. Finally, we note that a projection has a significant effect on the volume of an object. Since everything is flat after a projection, it has zero 3D volume.
Most linear maps change volumes; some don’t. Since this is an important aspect of the action of a map, this section will discuss the effect of a linear map on volume. The unit cube in the [e1, e2, e3]-system has volume one. A linear map A will change that volume to that of the skew box spanned by the images of e1, e2, e3, i.e., by the volume spanned by the vectors a1, a2, a3—the column vectors of A. What is the volume spanned by a1, a2, a3?
First, let’s look at what we have done so far with areas and volumes. Recall the 2 × 2 determinant from Section 4.9. Through Sketch 4.8, the area of a 2D parallelogram was shown to be equivalent to a determinant. In fact, in Section 8.2 it was shown that the cross product can be used to calculate this area for a parallelogram embedded in 3D. With a very geometric approach, the scalar triple product of Section 8.5 gives us the means to calculate the volume of a parallelepiped by simply using a “base area times height” calculation. Let’s revisit that formula and look at it from the perspective of linear maps.
So, using linear maps, we want to illustrate that the volume of the parallelepiped, or skew box, simply reduces to a 3D determinant calculation. Proceeding directly with a sketch in the 3D case would be difficult to follow. For 3D, let’s augment the determinant idea with the tools from Section 5.4. There we demonstrated how shears— area-preserving linear maps—can be used to transform a matrix to upper triangular. These are the forward elimination steps of Gauss elimination.
First, let’s introduce a 3 × 3 determinant of a matrix A. It is easily remembered as an alternating sum of 2 × 2 determinants:
\[
|A| = a_{1,1}\begin{vmatrix}a_{2,2}&a_{2,3}\\a_{3,2}&a_{3,3}\end{vmatrix}
- a_{2,1}\begin{vmatrix}a_{1,2}&a_{1,3}\\a_{3,2}&a_{3,3}\end{vmatrix}
+ a_{3,1}\begin{vmatrix}a_{1,2}&a_{1,3}\\a_{2,2}&a_{2,3}\end{vmatrix}. \tag{9.14}
\]
The representation in (9.14) is called the cofactor expansion. Each (signed) 2 × 2 determinant is the cofactor of the ai,j it is paired with in the sum. The sign comes from the factor (−1)i+j. For example, the cofactor of a2,1 is
\[
(-1)^{2+1}\begin{vmatrix}a_{1,2}&a_{1,3}\\a_{3,2}&a_{3,3}\end{vmatrix}.
\]
The cofactor is also written as (−1)i+jMi,j where Mi,j is called the minor of ai,j. As a result, (9.14) is also known as expansion by minors. We’ll look into this method more in Section 12.6.
If (9.14) is expanded, then an interesting form for writing the determinant arises. The formula is nearly impossible to remember, but the following trick is not. Copy the first two columns after the last column:
\[
\begin{matrix}
a_{1,1}&a_{1,2}&a_{1,3}&a_{1,1}&a_{1,2}\\
a_{2,1}&a_{2,2}&a_{2,3}&a_{2,1}&a_{2,2}\\
a_{3,1}&a_{3,2}&a_{3,3}&a_{3,1}&a_{3,2}
\end{matrix}
\]
Next, form the products along the three "diagonals" (running down and to the right) and add them; these are the three "plus" products $a_{1,1}a_{2,2}a_{3,3}$, $a_{1,2}a_{2,3}a_{3,1}$, and $a_{1,3}a_{2,1}a_{3,2}$. Then form the products along the three "antidiagonals" (running down and to the left) and subtract them; these are the three "minus" products $a_{3,1}a_{2,2}a_{1,3}$, $a_{3,2}a_{2,3}a_{1,1}$, and $a_{3,3}a_{2,1}a_{1,2}$.
The complete formula for the 3 × 3 determinant is
\[
|A| = a_{1,1}a_{2,2}a_{3,3} + a_{1,2}a_{2,3}a_{3,1} + a_{1,3}a_{2,1}a_{3,2}
- a_{3,1}a_{2,2}a_{1,3} - a_{3,2}a_{2,3}a_{1,1} - a_{3,3}a_{2,1}a_{1,2}.
\]
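The cofactor expansion (9.14) translates directly into code. This Python sketch (an editorial illustration; the helper names are ours) expands down the first column:

```python
def det2(a, b, c, d):
    # Determinant of the 2 x 2 matrix
    # | a b |
    # | c d |
    return a * d - b * c

def det3(A):
    """Cofactor expansion (9.14) down the first column of a 3 x 3 matrix."""
    return (  A[0][0] * det2(A[1][1], A[1][2], A[2][1], A[2][2])
            - A[1][0] * det2(A[0][1], A[0][2], A[2][1], A[2][2])
            + A[2][0] * det2(A[0][1], A[0][2], A[1][1], A[1][2]))

# The matrix of the upcoming volume example.
A = [[4.0, -1.0,  0.1],
     [0.0,  4.0, -0.1],
     [0.0,  4.0,  0.1]]
print(det3(A))  # 3.2
```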
What is the volume spanned by the three vectors
\[
\mathbf{a}_1 = \begin{bmatrix}4\\0\\0\end{bmatrix},\quad
\mathbf{a}_2 = \begin{bmatrix}-1\\4\\4\end{bmatrix},\quad
\mathbf{a}_3 = \begin{bmatrix}0.1\\-0.1\\0.1\end{bmatrix}?
\]
All we have to do is to compute
\[
\det[\mathbf{a}_1, \mathbf{a}_2, \mathbf{a}_3]
= 4\begin{vmatrix}4&-0.1\\4&0.1\end{vmatrix}
= 4\bigl(4\times0.1 - (-0.1)\times4\bigr) = 3.2.
\]
(Here we used an alternative notation, det, for the determinant.) In this computation, we did not write down zero terms.
As we have seen in Section 9.5, a 3D shear preserves volume. Therefore, we can apply a series of shears to the matrix A, resulting in a new matrix
\[
\tilde{A} = \begin{bmatrix}\tilde{a}_{1,1}&\tilde{a}_{1,2}&\tilde{a}_{1,3}\\ 0&\tilde{a}_{2,2}&\tilde{a}_{2,3}\\ 0&0&\tilde{a}_{3,3}\end{bmatrix}.
\]
The determinant of $\tilde{A}$ is simply the product of its diagonal entries,
\[
|\tilde{A}| = \tilde{a}_{1,1}\tilde{a}_{2,2}\tilde{a}_{3,3}, \tag{9.15}
\]
with, of course, $|A| = |\tilde{A}|$.
Let’s continue with Example 9.8. One simple row operation, row3 = row3 − row2, will achieve the upper triangular matrix
\[
\tilde{A} = \begin{bmatrix}4&-1&0.1\\ 0&4&-0.1\\ 0&0&0.2\end{bmatrix},
\]
and we can determine that $|\tilde{A}| = 4\times4\times0.2 = 3.2 = |A|$.
For 3 × 3 matrices, we don't actually calculate the volume spanned by three vectors by proceeding with the forward elimination steps, or shears.3 We would just directly calculate the 3 × 3 determinant from (9.14). What is interesting about this development is that now we can illustrate, as in Sketch 9.10, how the determinant defines the volume of the skew box. The first two column vectors of $\tilde{A}$ lie in the $[\mathbf{e}_1, \mathbf{e}_2]$-plane and span a parallelogram of area $\tilde{a}_{1,1}\tilde{a}_{2,2}$; the third column vector rises $\tilde{a}_{3,3}$ above that plane, so (9.15) is just a "base area times height" computation.
Let's conclude this section with some rules for determinants. Suppose we have two 3 × 3 matrices, A and B, with column vectors $[\mathbf{a}_1\ \mathbf{a}_2\ \mathbf{a}_3]$ and $[\mathbf{b}_1\ \mathbf{b}_2\ \mathbf{b}_3]$, respectively.
Multiples of rows can be added together without changing the determinant. For example, the shears of Gauss elimination do not change the determinant, as we observed in the simple example above.
The determinant of a product is the product of the determinants, |AB| = |A||B|. For example, the determinant of the shear matrix S2 in (9.6) is one; thus |S2A| = |S2||A| = |A|.
The determinant of an inverse is the reciprocal of the determinant:
\[
|A^{-1}| = \frac{1}{|A|}.
\]
If we apply a linear map A to a vector v and then apply a map B to the result, we may write this as
\[
\mathbf{v}' = BA\mathbf{v}.
\]
Matrix multiplication is defined just as in the 2D case; the element $c_{i,j}$ of the product matrix $C = BA$ is obtained as the dot product of the $i$th row of $B$ with the $j$th column of $A$. A handy way to keep the dot products in order is the multiplication scheme of Section 4.2: write $A$ above and to the right of $B$, so that each entry of $C$ appears at the intersection of the row of $B$ and the column of $A$ whose dot product produces it. When $B$ and $A$ are 3 × 3 matrices, the result is another 3 × 3 matrix. In the example in Section 9.1, a 3 × 3 matrix $A$ is multiplied by a 3 × 1 matrix (vector) $\mathbf{v}$, resulting in a 3 × 1 matrix, or vector. Thus two matrices need not be the same size in order to multiply them. There is a rule, however! Suppose we are to multiply two matrices $A$ and $B$ together as $AB$. The sizes of $A$ and $B$ are
\[
m\times n \quad\text{and}\quad n\times p, \tag{9.16}
\]
respectively. The resulting matrix will be of size m × p—the “outside” dimensions in (9.16). In order to form AB, it is necessary that the “inside” dimensions, both n here, be equal. The matrix multiplication scheme from Section 4.2 simplifies hand-calculations by illustrating the resulting dimensions.
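The size rule is easy to see with a numerical toolkit; the following Python/NumPy sketch (an editorial illustration) multiplies matrices of different shapes:

```python
import numpy as np

B = np.ones((3, 3))   # 3 x 3
A = np.ones((3, 1))   # 3 x 1, a column vector
print((B @ A).shape)  # (3, 1): the "outside" dimensions survive

M = np.ones((2, 3))   # m x n with m = 2, n = 3
N = np.ones((3, 4))   # n x p: the "inside" dimensions (both 3) match
print((M @ N).shape)  # (2, 4), i.e., m x p
```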
As in the 2D case, matrix multiplication does not commute! That is, AB ≠ BA in most cases. An interesting difference between 2D and 3D is the fact that in 2D, rotations did commute; however, in 3D they do not. For example, in 2D, rotating first by α and then by β is no different from doing it the other way around. In 3D, that is not the case. Let’s look at an example to illustrate this point.
Let’s look at a rotation by −90° around the e1-axis with matrix R1 and a rotation by −90° around the e3-axis with matrix R3:
\[
R_1 = \begin{bmatrix}1&0&0\\0&0&1\\0&-1&0\end{bmatrix}
\quad\text{and}\quad
R_3 = \begin{bmatrix}0&1&0\\-1&0&0\\0&0&1\end{bmatrix}.
\]
Figure 9.8 illustrates what the algebra tells us:
\[
R_3R_1 = \begin{bmatrix}0&0&1\\-1&0&0\\0&-1&0\end{bmatrix}
\quad\text{is not equal to}\quad
R_1R_3 = \begin{bmatrix}0&1&0\\0&0&1\\1&0&0\end{bmatrix}.
\]
Also helpful for understanding what is happening is to track the transformation of a point p on the letter L. Form the vector v = p − o, and let's track
\[
\mathbf{v} = \begin{bmatrix}0\\0\\1\end{bmatrix}.
\]
In Figure 9.8 on the left, observe the transformation of v:
\[
R_1\mathbf{v} = \begin{bmatrix}0\\1\\0\end{bmatrix}
\quad\text{and}\quad
R_3R_1\mathbf{v} = \begin{bmatrix}1\\0\\0\end{bmatrix}.
\]
Now, on the right, observe the transformation of v:
\[
R_3\mathbf{v} = \begin{bmatrix}0\\0\\1\end{bmatrix}
\quad\text{and}\quad
R_1R_3\mathbf{v} = \begin{bmatrix}0\\1\\0\end{bmatrix}.
\]
So it does matter which rotation we perform first!
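The whole experiment fits in a few lines of code; this Python/NumPy sketch (an editorial illustration) reproduces both orderings:

```python
import numpy as np

# Rotations by -90 degrees about e1 and about e3, as in the example.
R1 = np.array([[1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0],
               [0.0, -1.0, 0.0]])
R3 = np.array([[ 0.0, 1.0, 0.0],
               [-1.0, 0.0, 0.0],
               [ 0.0, 0.0, 1.0]])

v = np.array([0.0, 0.0, 1.0])
print(R3 @ R1 @ v)  # [1. 0. 0.]
print(R1 @ R3 @ v)  # [0. 1. 0.]

# The two composite maps differ: 3D rotations do not commute.
assert not np.allclose(R1 @ R3, R3 @ R1)
```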
In Section 5.9, we saw how inverse matrices undo linear maps. A linear map A takes a vector v to its image v′. The inverse map, A−1, will take v′ back to v, i.e., A−1v′ = v or A−1Av = v. Thus, the combined action of A−1A has no effect on any vector v, which we can write as
\[
A^{-1}A = I, \tag{9.17}
\]
where I is the 3 × 3 identity matrix. If we applied A−1 to v first, and then applied A, there would not be any action either; in other words,
\[
AA^{-1} = I, \tag{9.18}
\]
too.
A matrix is not always invertible. For example, the projections from Section 9.7 are rank deficient, and therefore not invertible. This is apparent from Sketch 9.9: once we flatten the vectors $\mathbf{a}_i$ to $\mathbf{a}'_i$ in 2D, there isn't enough information available in the $\mathbf{a}'_i$ to return them to 3D.
As we discovered in Section 9.6 on rotations, orthogonal matrices, which are constructed from a set of orthonormal vectors, possess the nice property RT = R−1. Forming the reverse rotation is simple and requires no computation; this provides for a huge savings in computer graphics where rotating objects is a common operation.
Scaling also has an inverse, which is simple to compute. If
\[
S = \begin{bmatrix}s_{1,1}&0&0\\0&s_{2,2}&0\\0&0&s_{3,3}\end{bmatrix},
\quad\text{then}\quad
S^{-1} = \begin{bmatrix}1/s_{1,1}&0&0\\0&1/s_{2,2}&0\\0&0&1/s_{3,3}\end{bmatrix}.
\]
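Both shortcut inverses are easy to verify numerically. This Python/NumPy sketch (an editorial illustration, with an arbitrarily chosen angle) checks a scaling and a rotation:

```python
import numpy as np

# Inverting a scaling: take reciprocals of the diagonal entries.
S = np.diag([2.0, 4.0, 0.5])
S_inv = np.diag(1.0 / np.diag(S))
assert np.allclose(S @ S_inv, np.eye(3))

# Inverting a rotation: just transpose, since R is orthogonal.
c, s = np.cos(0.7), np.sin(0.7)
R = np.array([[c, -s, 0.0],
              [s,  c, 0.0],
              [0.0, 0.0, 1.0]])
assert np.allclose(R.T @ R, np.eye(3))
```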
Here are more rules for matrices. These involve calculating with inverse matrices.
\[
A^{-n} = (A^{-1})^n = \underbrace{A^{-1}\cdots A^{-1}}_{n\ \text{times}},\qquad
(A^{-1})^{-1} = A,\qquad
(kA)^{-1} = \frac{1}{k}A^{-1},\qquad
(AB)^{-1} = B^{-1}A^{-1}.
\]
See Section 12.4 for details on calculating A−1.
A handful of matrix properties are explained and illustrated in Chapter 4. Here we restate them so they are conveniently together. These properties hold for n × n matrices (the topic of Chapter 12):
distributive laws: $A(B+C) = AB + AC$ and $(B+C)A = BA + CA$;
scalar laws: $a(B+C) = aB + aC$, $(a+b)C = aC + bC$, and $a(BC) = (aB)C = B(aC)$;
laws involving exponents: $A^rA^s = A^{r+s}$ and $(A^r)^s = A^{rs}$;
laws involving the transpose: $(A+B)^{\mathrm{T}} = A^{\mathrm{T}} + B^{\mathrm{T}}$, $(cA)^{\mathrm{T}} = cA^{\mathrm{T}}$, $(AB)^{\mathrm{T}} = B^{\mathrm{T}}A^{\mathrm{T}}$, and $(A^{\mathrm{T}})^{\mathrm{T}} = A$.
Let $\mathbf{v}' = 3\mathbf{a}_1 + 2\mathbf{a}_2 + \mathbf{a}_3$, where
\[
\mathbf{a}_1 = \begin{bmatrix}1\\1\\1\end{bmatrix},\quad
\mathbf{a}_2 = \begin{bmatrix}2\\0\\0\end{bmatrix},\quad
\mathbf{a}_3 = \begin{bmatrix}0\\3\\0\end{bmatrix}.
\]
What is v′? Write this equation in matrix form.
What is the transpose of the matrix
\[
A = \begin{bmatrix}1&5&-4\\-1&-2&0\\2&3&-4\end{bmatrix}?
\]
Given a 2D linear subspace formed by vectors w and v, is u an element of that subspace?
\[
\mathbf{v} = \begin{bmatrix}1\\0\\0\end{bmatrix},\quad
\mathbf{w} = \begin{bmatrix}1\\1\\1\end{bmatrix},\quad
\mathbf{u} = \begin{bmatrix}0\\0\\1\end{bmatrix}.
\]
Given
\[
\mathbf{v} = \begin{bmatrix}3\\2\\-1\end{bmatrix},\quad
\mathbf{w} = \begin{bmatrix}1\\-1\\2\end{bmatrix},\quad
\mathbf{u} = \begin{bmatrix}7\\3\\0\end{bmatrix},
\]
is u in the subspace defined by v and w?
Let V1 be the one-dimensional subspace defined by
\[
\mathbf{v} = \begin{bmatrix}1\\1\\0\end{bmatrix}.
\]
What vector w′ in V1 is closest to
\[
\mathbf{w} = \begin{bmatrix}1\\0\\1\end{bmatrix}?
\]
Describe the linear map given by the matrix
\[
\begin{bmatrix}1&0&0\\0&0&-1\\0&1&0\end{bmatrix}
\]
by stating if it is volume preserving and stating the action of the map. Hint: Examine where the ei-axes are mapped.
What is the shear matrix that maps
\[
\begin{bmatrix}a\\b\\c\end{bmatrix} \quad\text{to}\quad \begin{bmatrix}0\\0\\c\end{bmatrix}?
\]
Map the unit cube with this matrix. What is the volume of the resulting parallelepiped?
Construct the orthogonal projection matrix P that projects onto the line spanned by
\[
\mathbf{u} = \begin{bmatrix}1/\sqrt{3}\\1/\sqrt{3}\\1/\sqrt{3}\end{bmatrix}
\]
and what is the action of the map, v′ = Pv? What is the action of the map on the following two vectors:
\[
\mathbf{v}_1 = \begin{bmatrix}1\\1\\1\end{bmatrix}
\quad\text{and}\quad
\mathbf{v}_2 = \begin{bmatrix}0\\0\\1\end{bmatrix}?
\]
What is the rank of this matrix? What is the determinant?
Construct the projection matrix P that projects into the plane spanned by
\[
\mathbf{u}_1 = \begin{bmatrix}1/\sqrt{2}\\1/\sqrt{2}\\0\end{bmatrix}
\quad\text{and}\quad
\mathbf{u}_2 = \begin{bmatrix}0\\0\\1\end{bmatrix}.
\]
What is the action of the map, v′ = Pv? What is the action of the map on the following vectors:
\[
\mathbf{v}_1 = \begin{bmatrix}1\\1\\1\end{bmatrix},\quad
\mathbf{v}_2 = \begin{bmatrix}1\\0\\0\end{bmatrix},\quad
\mathbf{v}_3 = \begin{bmatrix}1\\-1\\0\end{bmatrix}?
\]
What is the rank of this matrix? What is the determinant?
Given the projection matrix
\[
A = \begin{bmatrix}1&0&-1\\0&1&0\\0&0&0\end{bmatrix},
\]
what is the projection direction? What type of projection is it?
What is the cofactor expansion of
\[
A = \begin{bmatrix}1&2&3\\2&0&0\\1&0&1\end{bmatrix}?
\]
What is |A|?
The matrix
\[
A = \begin{bmatrix}1&2&3\\2&0&0\\1&1&1\end{bmatrix}
\]
is invertible and |A| = 2. What is |A−1|?
Compute
\[
\begin{bmatrix}0&0&1\\1&-2&0\\-2&1&1\end{bmatrix}
\begin{bmatrix}1&5&-4\\-1&-2&0\\2&3&-4\end{bmatrix}.
\]
What is AB and BA given
\[
A = \begin{bmatrix}1&2&3\\2&0&0\\1&1&1\end{bmatrix}
\quad\text{and}\quad
B = \begin{bmatrix}0&1&0\\1&1&1\\1&0&0\end{bmatrix}?
\]
What is AB and BA given
\[
A = \begin{bmatrix}1&2&3\\2&0&0\\1&1&1\end{bmatrix}
\quad\text{and}\quad
B = \begin{bmatrix}1&1\\2&2\\0&1\end{bmatrix}?
\]
Find the inverse for each of the following matrices:
\[
\text{rotation: } \begin{bmatrix}1/\sqrt{2}&0&1/\sqrt{2}\\0&1&0\\-1/\sqrt{2}&0&1/\sqrt{2}\end{bmatrix},\quad
\text{scale: } \begin{bmatrix}1/2&0&0\\0&1/4&0\\0&0&2\end{bmatrix},\quad
\text{projection: } \begin{bmatrix}1&0&-1\\0&1&0\\0&0&0\end{bmatrix}.
\]
What is the inverse of the matrix
\[
A = \begin{bmatrix}5&8&2\\2&3&4\end{bmatrix}?
\]
If
\[
B = \begin{bmatrix}0&1&0\\1&1&1\\1&0&0\end{bmatrix}
\quad\text{and}\quad
B^{-1} = \begin{bmatrix}0&0&1\\1&0&0\\-1&1&-1\end{bmatrix},
\]
what is (3B)−1?
1. Actually, perspective maps are also needed here. They will be discussed in Section 10.5.
2. We have shown this only for the unit cube, but it is true for any other object as well.
3. However, we will use forward elimination for n × n systems. The sign of the determinant in (9.15) needs to be adjusted if pivoting is included in forward elimination. See Section 12.6 for details.