Chapter 5

2 × 2 Linear Systems

Just about anybody can solve two equations in two unknowns by somehow manipulating the equations. In this chapter, we will develop a systematic way for finding the solution, simply by checking the underlying geometry. This approach will later enable us to solve much larger systems of equations in Chapters 12 and 13. Figure 5.1 illustrates many instances of intersecting two lines: a problem that can be formulated as a 2 × 2 linear system.

Figure 5.1

Intersection of lines: two families of lines are shown; the intersections of corresponding line pairs are marked by black boxes. For each intersection, a 2 × 2 linear system has to be solved.

5.1 Skew Target Boxes Revisited

In our standard [e1, e2]-coordinate system, suppose we are given two vectors a1 and a2. In Section 4.1, we showed how these vectors define a skew target box with its lower-left corner at the origin. As illustrated in Sketch 5.1, suppose we also are given a vector b with respect to the [e1, e2]-system. Now the question arises, what are the components of b with respect to the [a1, a2]-system? In other words, we want to find a vector u with components u1 and u2, satisfying

$$u_1\mathbf{a}_1 + u_2\mathbf{a}_2 = \mathbf{b}. \qquad (5.1)$$

Sketch 5.1

Geometry of a 2 × 2 system.

Example 5.1

Before we proceed further, let’s look at an example. Following Sketch 5.1, let

$$\mathbf{a}_1 = \begin{bmatrix} 2 \\ 1 \end{bmatrix}, \quad \mathbf{a}_2 = \begin{bmatrix} 4 \\ 6 \end{bmatrix}, \quad \mathbf{b} = \begin{bmatrix} 4 \\ 4 \end{bmatrix}.$$

Upon examining the sketch, we see that

$$1 \times \begin{bmatrix} 2 \\ 1 \end{bmatrix} + \frac{1}{2} \times \begin{bmatrix} 4 \\ 6 \end{bmatrix} = \begin{bmatrix} 4 \\ 4 \end{bmatrix}.$$

In the [a1, a2]-system, b has components (1, 1/2). In the [e1, e2]-system, it has components (4, 4).

What we have here are really two equations in the two unknowns u1 and u2, which we see by expanding the vector equation into

$$\begin{aligned}
2u_1 + 4u_2 &= 4\\
u_1 + 6u_2 &= 4.
\end{aligned} \qquad (5.2)$$

And as we saw in Example 5.1, these two equations in two unknowns have the solution u1 = 1 and u2 = 1/2, as is seen by inserting these values for u1 and u2 into the equations.
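To double-check this numerically, here is a quick sketch using NumPy (the array values are taken directly from Example 5.1):

```python
import numpy as np

# The columns of A are the vectors a1 and a2 from Example 5.1.
A = np.array([[2.0, 4.0],
              [1.0, 6.0]])
b = np.array([4.0, 4.0])

# Solve A u = b, i.e., find the components of b in the [a1, a2]-system.
u = np.linalg.solve(A, b)
print(u)                                # [1.  0.5]

# Verify the linear combination u1*a1 + u2*a2 = b.
print(u[0] * A[:, 0] + u[1] * A[:, 1])  # [4. 4.]
```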

Being able to solve a set of two simultaneous equations allows us to switch back and forth between different coordinate systems. The rest of this chapter is dedicated to a detailed discussion of how to solve these equations.

5.2 The Matrix Form

The two equations in (5.2) are also called a linear system. It can be written more compactly if we use matrix notation:

$$\begin{bmatrix} 2 & 4 \\ 1 & 6 \end{bmatrix} \begin{bmatrix} u_1 \\ u_2 \end{bmatrix} = \begin{bmatrix} 4 \\ 4 \end{bmatrix}. \qquad (5.3)$$

In general, a 2 × 2 linear system looks like this:

$$\begin{bmatrix} a_{1,1} & a_{1,2} \\ a_{2,1} & a_{2,2} \end{bmatrix} \begin{bmatrix} u_1 \\ u_2 \end{bmatrix} = \begin{bmatrix} b_1 \\ b_2 \end{bmatrix}. \qquad (5.4)$$

Equation (5.4) is shorthand notation for the equations

$$a_{1,1}u_1 + a_{1,2}u_2 = b_1, \qquad (5.5)$$

$$a_{2,1}u_1 + a_{2,2}u_2 = b_2. \qquad (5.6)$$

We sometimes write it even shorter, using a matrix A:

$$A\mathbf{u} = \mathbf{b}, \qquad (5.7)$$

where

$$A = \begin{bmatrix} a_{1,1} & a_{1,2} \\ a_{2,1} & a_{2,2} \end{bmatrix}, \quad \mathbf{u} = \begin{bmatrix} u_1 \\ u_2 \end{bmatrix}, \quad \mathbf{b} = \begin{bmatrix} b_1 \\ b_2 \end{bmatrix}.$$

Both u and b represent vectors, not points! (See Sketch 5.1 for an illustration.) The vector u is called the solution of the linear system.

While the savings of this notation are not completely obvious in the 2 × 2 case, it will save a lot of work for more complicated cases with more equations and unknowns.

The columns of the matrix A correspond to the vectors a1 and a2. We could then rewrite our linear system as (5.1). Geometrically, we are trying to express the given vector b as a linear combination of the given vectors a1 and a2; we need to determine the factors u1 and u2. If we are able to find at least one solution, then the linear system is called consistent, otherwise it is called inconsistent. Three possibilities for our solution space exist.

  1. There is exactly one solution vector u. In this case, |A| ≠ 0, thus the matrix has full rank and is nonsingular.
  2. There is no solution, or in other words, the system is inconsistent. (See Section 5.6 for a geometric description.)
  3. There are infinitely many solutions. (See Sections 5.7 and 5.8 for examples.)

5.3 A Direct Approach: Cramer’s Rule

Sketch 5.2 offers a direct solution to our linear system. By inspecting the areas of the parallelograms in the sketch, we see that

$$u_1 = \frac{\operatorname{area}(\mathbf{b}, \mathbf{a}_2)}{\operatorname{area}(\mathbf{a}_1, \mathbf{a}_2)}, \qquad u_2 = \frac{\operatorname{area}(\mathbf{a}_1, \mathbf{b})}{\operatorname{area}(\mathbf{a}_1, \mathbf{a}_2)}.$$

Sketch 5.2

Cramer’s rule.

An easy way to see how these ratios of areas correspond to u1 and u2 is to shear the parallelograms formed by b, a2 and b, a1 onto the a1 and a2 axes, respectively. (Shears preserve areas.) The area of a parallelogram is given by the determinant of the two vectors spanning it. Recall from Section 4.9 that this is a signed area. This method of solving for the solution of a linear system is called Cramer’s rule.

Example 5.2

Applying Cramer’s rule to the linear system in (5.3), we get

$$u_1 = \frac{\begin{vmatrix} 4 & 4 \\ 4 & 6 \end{vmatrix}}{\begin{vmatrix} 2 & 4 \\ 1 & 6 \end{vmatrix}} = \frac{8}{8}, \qquad u_2 = \frac{\begin{vmatrix} 2 & 4 \\ 1 & 4 \end{vmatrix}}{\begin{vmatrix} 2 & 4 \\ 1 & 6 \end{vmatrix}} = \frac{4}{8}.$$

Examining the determinant in the numerator, notice that b replaces a1 in the solution for u1 and then b replaces a2 in the solution for u2.

Notice that if the area spanned by a1 and a2 is zero, that is, the vectors are multiples of each other, then Cramer’s rule will not result in a solution. (See Section 5.6, Section 5.7 and Section 5.8 for more information on this situation.)
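Here is a minimal sketch of Cramer's rule for a 2 × 2 system in Python; the function name and the tolerance `eps` used to detect a zero area are choices of this sketch, not part of the text:

```python
def cramer_2x2(a11, a12, a21, a22, b1, b2, eps=1e-12):
    """Solve [[a11, a12], [a21, a22]] [u1, u2] = [b1, b2] by Cramer's rule.

    Returns (u1, u2), or None if the columns are (numerically)
    linearly dependent, i.e., area(a1, a2) is zero.
    """
    det = a11 * a22 - a12 * a21        # area(a1, a2)
    if abs(det) < eps:
        return None                    # no unique solution
    u1 = (b1 * a22 - a12 * b2) / det   # area(b, a2) / area(a1, a2)
    u2 = (a11 * b2 - b1 * a21) / det   # area(a1, b) / area(a1, a2)
    return u1, u2

print(cramer_2x2(2, 4, 1, 6, 4, 4))    # (1.0, 0.5), as in Example 5.2
```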

Cramer’s rule is primarily of theoretical importance. For larger systems, Cramer’s rule is both expensive and numerically unstable. Hence, we now study a more effective method.

5.4 Gauss Elimination

Let’s consider a special 2 × 2 linear system:

$$\begin{bmatrix} a_{1,1} & a_{1,2} \\ 0 & a_{2,2} \end{bmatrix} \mathbf{u} = \mathbf{b}. \qquad (5.8)$$

This situation is shown in Sketch 5.3. This matrix is called upper triangular because all elements below the diagonal are zero, forming a triangle of numbers above the diagonal.

Sketch 5.3

Sketch showing a special linear system.

A special linear system.

We can solve this system without much work. Examining Equation (5.8), we see it is possible to solve for

$$u_2 = b_2 / a_{2,2}.$$

With u2 in hand, we can solve the first equation from (5.5) for

$$u_1 = \frac{1}{a_{1,1}}\left(b_1 - u_2 a_{1,2}\right).$$

This technique of solving the equations from the bottom up is called back substitution.

Notice that the process of back substitution requires divisions. Therefore, if either diagonal element, a1,1 or a2,2, equals zero, then the algorithm will fail. This type of failure indicates that the columns of A are not linearly independent. (See Section 5.6, Section 5.7 and Section 5.8 for more information on this situation.) Because of the central role that the diagonal elements play in Gauss elimination, they are called pivots.
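In code, back substitution for the upper triangular system (5.8) takes only a few lines; a sketch with the zero-pivot check made explicit (the function name is a choice of this sketch):

```python
def back_substitution_2x2(a11, a12, a22, b1, b2):
    """Solve [[a11, a12], [0, a22]] [u1, u2] = [b1, b2] from the bottom up."""
    if a11 == 0.0 or a22 == 0.0:
        # A zero pivot: the columns of A are not linearly independent.
        raise ZeroDivisionError("zero pivot")
    u2 = b2 / a22
    u1 = (b1 - a12 * u2) / a11
    return u1, u2

print(back_substitution_2x2(2, 4, 4, 4, 2))  # (1.0, 0.5)
```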

In general, we will not be so lucky to encounter an upper triangular system as in (5.8). But any linear system in which A is nonsingular may be transformed to this simple form, as we shall see by reexamining the system in (5.3). We write it as

$$u_1 \begin{bmatrix} 2 \\ 1 \end{bmatrix} + u_2 \begin{bmatrix} 4 \\ 6 \end{bmatrix} = \begin{bmatrix} 4 \\ 4 \end{bmatrix}.$$

This situation is shown in Sketch 5.4. Clearly, a1 is not on the e1-axis as we would like, but we can apply a stepwise procedure so that it will become just that. This systematic, stepwise procedure is called forward elimination. The process of forward elimination followed by back substitution is called Gauss elimination.1

Sketch 5.4

Sketch showing the geometry of a linear system.

The geometry of a linear system.

Recall one key fact from Chapter 4: linear maps do not change linear combinations. That means if we apply the same linear map to all vectors in our system, then the factors u1 and u2 won’t change. If the map is given by a matrix S, then

$$S\left[u_1 \begin{bmatrix} 2 \\ 1 \end{bmatrix} + u_2 \begin{bmatrix} 4 \\ 6 \end{bmatrix}\right] = S\begin{bmatrix} 4 \\ 4 \end{bmatrix}.$$

In order to get a1 to line up with the e1-axis, we will employ a shear parallel to the e2-axis, such that

$$\begin{bmatrix} 2 \\ 1 \end{bmatrix} \quad \text{is mapped to} \quad \begin{bmatrix} 2 \\ 0 \end{bmatrix}.$$

That shear (see Section 4.7) is given by the matrix

$$S_1 = \begin{bmatrix} 1 & 0 \\ -1/2 & 1 \end{bmatrix}.$$

We apply S1 to all vectors involved in our system:

$$\begin{bmatrix} 1 & 0 \\ -1/2 & 1 \end{bmatrix} \begin{bmatrix} 2 \\ 1 \end{bmatrix} = \begin{bmatrix} 2 \\ 0 \end{bmatrix}, \quad
\begin{bmatrix} 1 & 0 \\ -1/2 & 1 \end{bmatrix} \begin{bmatrix} 4 \\ 6 \end{bmatrix} = \begin{bmatrix} 4 \\ 4 \end{bmatrix}, \quad
\begin{bmatrix} 1 & 0 \\ -1/2 & 1 \end{bmatrix} \begin{bmatrix} 4 \\ 4 \end{bmatrix} = \begin{bmatrix} 4 \\ 2 \end{bmatrix}.$$

The effect of this map is shown in Sketch 5.5.

Sketch 5.5

Sketch showing shearing the vectors in a linear system.

Shearing the vectors in a linear system.

Our transformed system now reads

$$\begin{bmatrix} 2 & 4 \\ 0 & 4 \end{bmatrix} \begin{bmatrix} u_1 \\ u_2 \end{bmatrix} = \begin{bmatrix} 4 \\ 2 \end{bmatrix}.$$

Now we can employ back substitution to find

$$u_2 = 2/4 = 1/2, \qquad u_1 = \frac{1}{2}\left(4 - 4 \times \frac{1}{2}\right) = 1.$$

For 2 × 2 linear systems there is only one matrix entry to zero in the forward elimination procedure. We will restate the procedure in a more algorithmic way in Chapter 12 when there is more work to do.
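For the 2 × 2 case, such an algorithmic statement is already short enough to sketch here: forward elimination (the shear) followed by back substitution, assuming a1,1 ≠ 0 since pivoting only appears in Section 5.5:

```python
def gauss_2x2(a11, a12, a21, a22, b1, b2):
    """Solve [[a11, a12], [a21, a22]] [u1, u2] = [b1, b2]; assumes a11 != 0."""
    # Forward elimination: subtract (a21/a11) times row 1 from row 2.
    # This is the shear that maps a1 onto the e1-axis.
    s = a21 / a11
    a22 -= s * a12
    b2 -= s * b1
    # Back substitution.
    u2 = b2 / a22
    u1 = (b1 - a12 * u2) / a11
    return u1, u2

print(gauss_2x2(2, 4, 1, 6, 4, 4))    # (1.0, 0.5), the system (5.3)
print(gauss_2x2(1, 4, 2, -2, 0, 2))   # (0.8, -0.2), Example 5.3 below
```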

Example 5.3

We will look at one more example of forward elimination and back substitution. Let a linear system be given by

$$\begin{bmatrix} 1 & 4 \\ 2 & -2 \end{bmatrix} \begin{bmatrix} u_1 \\ u_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 2 \end{bmatrix}.$$

The shear that takes a1 to the e1-axis is given by

$$S_1 = \begin{bmatrix} 1 & 0 \\ -2 & 1 \end{bmatrix},$$

and it transforms the system to

$$\begin{bmatrix} 1 & 4 \\ 0 & -10 \end{bmatrix} \begin{bmatrix} u_1 \\ u_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 2 \end{bmatrix}.$$

Draw your own sketch to understand the geometry.

Using back substitution, the solution is now easily found as u2 = −2/10 and u1 = 8/10.

5.5 Pivoting

Consider the system

$$\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} u_1 \\ u_2 \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \end{bmatrix},$$

illustrated in Sketch 5.6.

Sketch 5.6

Sketch showing a linear system that needs pivoting.

A linear system that needs pivoting.

Our standard approach, shearing a1 onto the e1-axis, will not work here; there is no shear that takes

$$\begin{bmatrix} 0 \\ 1 \end{bmatrix}$$

onto the e1-axis. However, there is no problem if we simply exchange the two equations! Then we have

$$\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} u_1 \\ u_2 \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \end{bmatrix},$$

and thus u1 = u2 = 1. So we cannot blindly apply a shear to a1; we must first check that one exists. If it does not—i.e., if a1,1 = 0—exchange the equations.

As a rule of thumb, if a method fails because some number equals zero, then it will work poorly if that number is small. It is thus advisable to exchange the two equations anytime we have |a1,1| < |a2,1|. The absolute value is used here since we are interested in the magnitude of the involved numbers, not their sign. The process of exchanging equations (rows) so that the pivot is the largest in absolute value is called row pivoting or partial pivoting, and it is used to improve numerical stability. In Section 5.8, a special type of linear system is introduced that sometimes needs another type of pivoting. However, since row pivoting is the most common, we’ll refer to it simply as “pivoting.”
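In code, row pivoting adds one comparison and a possible exchange in front of the elimination sketch from Section 5.4:

```python
def gauss_2x2_pivot(a11, a12, a21, a22, b1, b2):
    """Solve [[a11, a12], [a21, a22]] [u1, u2] = [b1, b2] with row pivoting."""
    if abs(a11) < abs(a21):
        # Exchange the two equations so the larger pivot comes first.
        a11, a12, b1, a21, a22, b2 = a21, a22, b2, a11, a12, b1
    s = a21 / a11
    a22 -= s * a12
    b2 -= s * b1
    u2 = b2 / a22
    u1 = (b1 - a12 * u2) / a11
    return u1, u2

print(gauss_2x2_pivot(0, 1, 1, 0, 1, 1))  # (1.0, 1.0), the system above
```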

Example 5.4

Let’s study an example taken from [11]:

$$\begin{bmatrix} 0.0001 & 1 \\ 1 & 1 \end{bmatrix} \begin{bmatrix} u_1 \\ u_2 \end{bmatrix} = \begin{bmatrix} 1 \\ 2 \end{bmatrix}.$$

If we shear a1 onto the e1-axis, thus applying one forward elimination step, the new system reads

$$\begin{bmatrix} 0.0001 & 1 \\ 0 & -9999 \end{bmatrix} \begin{bmatrix} u_1 \\ u_2 \end{bmatrix} = \begin{bmatrix} 1 \\ -9998 \end{bmatrix}.$$

Performing back substitution, we find the solution is

$$\mathbf{u}_t = \begin{bmatrix} 1.0001 \\ 0.99989\ldots \end{bmatrix},$$

which we will call the “true” solution. Note the magnitude of changes in a2 and b relative to a1. This is the type of behavior that causes numerical problems. It can often be dealt with by using a larger number of digits.

Suppose we have a machine that stores only three digits, although it calculates with six digits. Due to round-off, the system above would be stored as

$$\begin{bmatrix} 0.0001 & 1 \\ 0 & -10000 \end{bmatrix} \begin{bmatrix} u_1 \\ u_2 \end{bmatrix} = \begin{bmatrix} 1 \\ -10000 \end{bmatrix},$$

which would result in a “round-off” solution of

$$\mathbf{u}_r = \begin{bmatrix} 0 \\ 1 \end{bmatrix},$$

which is not very close to the true solution ut, as

$$\|\mathbf{u}_t - \mathbf{u}_r\| = 1.0001.$$

Pivoting is a tool to dampen the effects of round-off. Now employ pivoting by exchanging the rows, yielding the system

$$\begin{bmatrix} 1 & 1 \\ 0.0001 & 1 \end{bmatrix} \begin{bmatrix} u_1 \\ u_2 \end{bmatrix} = \begin{bmatrix} 2 \\ 1 \end{bmatrix}.$$

Shear a1 onto the e1-axis, and the new system reads

$$\begin{bmatrix} 1 & 1 \\ 0 & 0.9999 \end{bmatrix} \begin{bmatrix} u_1 \\ u_2 \end{bmatrix} = \begin{bmatrix} 2 \\ 0.9998 \end{bmatrix},$$

which results in the “pivoting solution”

$$\mathbf{u}_p = \begin{bmatrix} 1 \\ 1 \end{bmatrix}.$$

Notice that the vectors of the linear systems are all within the same range. Even with the three-digit machine, this system will allow us to compute a result that is closer to the true solution because the effects of round-off have been minimized. Now the error is

$$\|\mathbf{u}_t - \mathbf{u}_p\| = 0.00014.$$
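We can imitate the three-digit machine by rounding every intermediate result to three significant digits. The following is a rough model of round-off, not how actual floating-point hardware works, but it reproduces both solutions above:

```python
from math import floor, log10

def fl(x, digits=3):
    """Round x to `digits` significant decimal digits."""
    if x == 0.0:
        return 0.0
    return round(x, digits - 1 - floor(log10(abs(x))))

def gauss_3digit(a11, a12, a21, a22, b1, b2):
    """Gauss elimination without pivoting, rounding each intermediate."""
    s = fl(a21 / a11)
    a22 = fl(a22 - fl(s * a12))
    b2 = fl(b2 - fl(s * b1))
    u2 = fl(b2 / a22)
    u1 = fl(fl(b1 - fl(a12 * u2)) / a11)
    return u1, u2

# Without pivoting: far from the true solution (1.0001, 0.99989...).
print(gauss_3digit(0.0001, 1, 1, 1, 1, 2))   # (0.0, 1.0)
# With the rows exchanged first (pivoting): much closer.
print(gauss_3digit(1, 1, 0.0001, 1, 2, 1))   # (1.0, 1.0)
```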

Numerical strategies are the primary topic of numerical analysis, but they cannot be ignored in the study of linear algebra. Because this is an important real-world topic, we will revisit it. In Section 12.2 we will present Gauss elimination with pivoting integrated into the algorithm. In Section 13.4 we will introduce the condition number of a matrix, which is a measure of how close a matrix is to being singular. Chapters 13 and 16 will introduce other methods for solving linear systems that are better to use when numerical issues are a concern.

5.6 Unsolvable Systems

Consider the situation shown in Sketch 5.7. The two vectors a1 and a2 are multiples of each other. In other words, they are linearly dependent.

Sketch 5.7

Sketch showing an unsolvable linear system.

An unsolvable linear system.

The corresponding linear system is

$$\begin{bmatrix} 2 & 1 \\ 4 & 2 \end{bmatrix} \begin{bmatrix} u_1 \\ u_2 \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \end{bmatrix}.$$

It is obvious from the sketch that we have a problem here, but let’s just blindly apply forward elimination; apply a shear such that a1 is mapped to the e1-axis. The resulting system is

$$\begin{bmatrix} 2 & 1 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} u_1 \\ u_2 \end{bmatrix} = \begin{bmatrix} 1 \\ -1 \end{bmatrix}.$$

But the last equation reads 0 = −1, and now we really are in trouble! This means that our system is inconsistent, and therefore does not have a solution.

It is possible, however, to find an approximate solution; this is done in the context of least squares methods (see Section 12.7).

5.7 Underdetermined Systems

Consider the system

$$\begin{bmatrix} 2 & 1 \\ 4 & 2 \end{bmatrix} \begin{bmatrix} u_1 \\ u_2 \end{bmatrix} = \begin{bmatrix} 3 \\ 6 \end{bmatrix},$$

shown in Sketch 5.8.

Sketch 5.8

Sketch showing an underdetermined linear system.

An underdetermined linear system.

We shear a1 onto the e1-axis, and obtain

$$\begin{bmatrix} 2 & 1 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} u_1 \\ u_2 \end{bmatrix} = \begin{bmatrix} 3 \\ 0 \end{bmatrix}.$$

Now the last equation reads 0 = 0—true, but a bit trivial! In reality, our system is just one equation written down twice in slightly different forms. This is also clear from the sketch: b may be written as a multiple of either a1 or a2, thus the system is underdetermined. This type of system is consistent because at least one solution exists. We can find a solution by setting u2 = 1, and then back substitution results in u1 = 1.
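In code, the underdetermined case is recognized after forward elimination by the second row turning into 0 = 0; we are then free to pick u2 and back-substitute (a sketch using the numbers above):

```python
def solve_underdetermined(a11, a12, b1, u2=1.0):
    """After elimination, only a11*u1 + a12*u2 = b1 remains
    (the second row became 0 = 0). Pick u2 freely and back-substitute."""
    u1 = (b1 - a12 * u2) / a11
    return u1, u2

print(solve_underdetermined(2.0, 1.0, 3.0))  # (1.0, 1.0), one of many solutions
```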

5.8 Homogeneous Systems

A system of the form

$$A\mathbf{u} = \mathbf{0}, \qquad (5.9)$$

i.e., one where the right-hand side consists of the zero vector, is called homogeneous. One obvious solution is the zero vector itself; this is called the trivial solution and is usually of little interest. If it has a solution u that is not the zero vector, then clearly all multiples cu are also solutions: we multiply both sides of the equations by a common factor c. In other words, the system has an infinite number of solutions.

Not all homogeneous systems have a nontrivial solution, however. Equation (5.9) may be read as follows: what vector u, when mapped by A, has the zero vector as its image? The only 2 × 2 maps capable of achieving this for a nonzero u have rank 1. They are characterized by the fact that their two columns a1 and a2 are parallel, or linearly dependent. If the system has only the trivial solution, then A is invertible.

Example 5.5

An example, illustrated in Sketch 5.9, should help. Let our homogeneous system be

$$\begin{bmatrix} 1 & 2 \\ 2 & 4 \end{bmatrix} \mathbf{u} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}.$$

Sketch 5.9

Sketch showing homogeneous system with nontrivial solution.

Homogeneous system with nontrivial solution.

Clearly, a2 = 2a1; the matrix A maps all vectors onto the line defined by a1 and the origin. In this example, any vector u that is perpendicular to a1 will be projected to the zero vector:

$$A(c\mathbf{u}) = \mathbf{0}.$$

After one step of forward elimination, we have

$$\begin{bmatrix} 1 & 2 \\ 0 & 0 \end{bmatrix} \mathbf{u} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}.$$

Any u2 solves the last equation. So let’s pick u2 = 1. Back substitution then gives u1 = −2, therefore

$$\mathbf{u} = \begin{bmatrix} -2 \\ 1 \end{bmatrix}$$

is a solution to the system; so is any multiple of it. Also check that a1 · u = 0, so they are in fact perpendicular.

All vectors u that satisfy a homogeneous system make up the kernel or null space of the matrix.
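The reasoning of Example 5.5 can be packaged as a small sketch that returns one nontrivial kernel vector of a singular 2 × 2 matrix (the names and the tolerance are choices of this sketch):

```python
def kernel_vector_2x2(a, b, c, d, eps=1e-12):
    """Return a nontrivial kernel vector of the singular matrix
    [[a, b], [c, d]], or None if the matrix is nonsingular."""
    if abs(a * d - b * c) > eps:
        return None              # full rank: only the trivial solution
    if abs(a) > eps or abs(b) > eps:
        return (-b, a)           # perpendicular to the first row
    if abs(c) > eps or abs(d) > eps:
        return (-d, c)           # first row is zero; use the second
    return (1.0, 0.0)            # zero matrix: every vector is in the kernel

print(kernel_vector_2x2(1, 2, 2, 4))  # (-2, 1), as in Example 5.5
```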

Example 5.6

We now consider an example of a homogeneous system that has only the trivial solution:

$$\begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix} \mathbf{u} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}.$$

The two columns of A are linearly independent; therefore, A does not reduce dimensionality. Then it cannot map any nonzero vector u to the zero vector!

This is clear after one step of forward elimination,

$$\begin{bmatrix} 1 & 2 \\ 0 & -3 \end{bmatrix} \mathbf{u} = \begin{bmatrix} 0 \\ 0 \end{bmatrix},$$

and back substitution results in

$$\mathbf{u} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}.$$

In general, we may state that a homogeneous system has nontrivial solutions (and therefore, infinitely many solutions) only if the columns of the matrix are linearly dependent.

In some situations, row pivoting will not prepare the linear system for back substitution, necessitating column pivoting. When columns are exchanged, the corresponding unknowns must be exchanged as well.

Example 5.7

This next linear system might seem like a silly one to pose; however, systems of this type do arise in Section 7.3 in the context of finding eigenvectors:

$$\begin{bmatrix} 0 & 1/2 \\ 0 & 0 \end{bmatrix} \mathbf{u} = \mathbf{0}.$$

In order to apply back substitution to this system, column pivoting is necessary, thus the system becomes

$$\begin{bmatrix} 1/2 & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} u_2 \\ u_1 \end{bmatrix} = \mathbf{0}.$$

Now we set u1 = 1 and proceed with back substitution to find that u2 = 0. All vectors of the form

$$\mathbf{u} = c \begin{bmatrix} 1 \\ 0 \end{bmatrix}$$

are solutions.

5.9 Undoing Maps: Inverse Matrices

In this section, we will see how to undo a linear map. Reconsider the linear system

Au=b.

The matrix A maps u to b. Now that we know u, what is the matrix B that maps b back to u,

$$\mathbf{u} = B\mathbf{b}? \qquad (5.10)$$

Defining B—the inverse map—is the purpose of this section.

In solving the original linear system, we applied shears to the column vectors of A and to b. After the first shear, we had

$$S_1 A\mathbf{u} = S_1\mathbf{b}.$$

This demonstrated how shears can be used to zero elements of the matrix. Let’s return to the example linear system in (5.3). After applying S1 the system became

$$\begin{bmatrix} 2 & 4 \\ 0 & 4 \end{bmatrix} \begin{bmatrix} u_1 \\ u_2 \end{bmatrix} = \begin{bmatrix} 4 \\ 2 \end{bmatrix}.$$

Let’s use another shear to zero the upper right element. Geometrically, this corresponds to constructing a shear that will map the new a2 to the e2-axis. It is given by the matrix

$$S_2 = \begin{bmatrix} 1 & -1 \\ 0 & 1 \end{bmatrix}.$$

Applying it to all vectors gives the new system

$$\begin{bmatrix} 2 & 0 \\ 0 & 4 \end{bmatrix} \begin{bmatrix} u_1 \\ u_2 \end{bmatrix} = \begin{bmatrix} 2 \\ 2 \end{bmatrix}.$$

After the second shear, our linear system has been changed to

$$S_2 S_1 A\mathbf{u} = S_2 S_1 \mathbf{b}.$$

Next, apply a nonuniform scaling S3 in the e1 and e2 directions that will map the latest a1 and a2 onto the vectors e1 and e2. For our current example,

$$S_3 = \begin{bmatrix} 1/2 & 0 \\ 0 & 1/4 \end{bmatrix}.$$

The new system becomes

$$\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} u_1 \\ u_2 \end{bmatrix} = \begin{bmatrix} 1 \\ 1/2 \end{bmatrix},$$

which corresponds to

$$S_3 S_2 S_1 A\mathbf{u} = S_3 S_2 S_1 \mathbf{b}.$$

This is a very special system. First of all, to solve for u is now trivial because A has been transformed into the unit matrix or identity matrix I,

$$I = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}. \qquad (5.11)$$

This process of transforming A until it becomes the identity is theoretically equivalent to the back substitution process of Section 5.4. However, back substitution uses fewer operations and thus is the method of choice for solving linear systems.

Yet we have now found the matrix B in (5.10)! The two shears and scaling transformed A into the identity matrix I:

$$S_3 S_2 S_1 A = I; \qquad (5.12)$$

thus, the solution of the system is

$$\mathbf{u} = S_3 S_2 S_1 \mathbf{b}. \qquad (5.13)$$

This leads to the inverse matrix A−1 of a matrix A:

$$A^{-1} = S_3 S_2 S_1. \qquad (5.14)$$

The matrix A−1 undoes the effect of the matrix A: the vector u was mapped to b by A, and b is mapped back to u by A−1. Thus, we can now write (5.13) as

$$\mathbf{u} = A^{-1}\mathbf{b}.$$

If this transformation result can be achieved, then A is called invertible. At the end of this section and in Sections 5.6 and 5.7, we discuss cases in which A−1 does not exist.

If we combine (5.12) and (5.14), we immediately get

$$A^{-1}A = I. \qquad (5.15)$$

This makes intuitive sense, since the actions of a map and its inverse should cancel out, i.e., not change anything—that is what I does! Figures 5.2 and 5.3 illustrate this. Then by the definition of the inverse,

$$AA^{-1} = I.$$

Figure 5.2

Inverse matrices: illustrating scaling and its inverse, and that AA−1 = A−1A = I. Top: the original Phoenix, the result of applying a scale, then the result of the inverse scale. Bottom: the original Phoenix, the result of applying the inverse scale, then the result of the original scale.

Figure 5.3

Inverse matrices: illustrating a shear and its inverse, and that AA−1 = A−1A = I. Top: the original Phoenix, the result of applying a shear, then the result of the inverse shear. Bottom: the original Phoenix, the result of applying the inverse shear, then the result of the original shear.

If A−1 exists, then it is unique.

The inverse of the identity is the identity:

$$I^{-1} = I.$$

The inverse of a scaling is given by:

$$\begin{bmatrix} s & 0 \\ 0 & t \end{bmatrix}^{-1} = \begin{bmatrix} 1/s & 0 \\ 0 & 1/t \end{bmatrix}.$$

Multiply this out to convince yourself.

Figure 5.2 shows the effects of a matrix and its inverse for the scaling

$$\begin{bmatrix} 1 & 0 \\ 0 & 0.5 \end{bmatrix}.$$

Figure 5.3 shows the effects of a matrix and its inverse for the shear

$$\begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}.$$

We consider the inverse of a rotation as follows: if Rα rotates by α degrees counterclockwise, then R−α rotates by α degrees clockwise, or

$$R_{-\alpha} = R_{\alpha}^{-1} = R_{\alpha}^{\mathrm{T}},$$

as we can see from the definition of a rotation matrix (4.16).

The rotation matrix is an example of an orthogonal matrix. An orthogonal matrix A is characterized by the fact that

$$A^{-1} = A^{\mathrm{T}}.$$

The column vectors a1 and a2 of an orthogonal matrix satisfy ||a1|| = 1, ||a2|| = 1 and a1 · a2 = 0. In words, the column vectors are orthonormal. The row vectors are orthonormal as well. Those transformations that are described by orthogonal matrices are called rigid body motions. The determinant of an orthogonal matrix is ±1.
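These properties are easy to verify numerically; here is a quick NumPy check for a rotation matrix (the angle 30° is an arbitrary choice):

```python
import numpy as np

alpha = np.radians(30)
R = np.array([[np.cos(alpha), -np.sin(alpha)],
              [np.sin(alpha),  np.cos(alpha)]])

print(np.allclose(R.T @ R, np.eye(2)))           # True: R^T is R's inverse
print(np.isclose(np.linalg.norm(R[:, 0]), 1.0))  # True: unit-length columns
print(np.isclose(R[:, 0] @ R[:, 1], 0.0))        # True: orthogonal columns
print(np.isclose(np.linalg.det(R), 1.0))         # True: determinant +1
```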

We add without proof two fairly obvious identities that involve the inverse:

$$\left(A^{-1}\right)^{-1} = A, \qquad (5.16)$$

which should be obvious from Figures 5.2 and 5.3, and

$$\left(A^{-1}\right)^{\mathrm{T}} = \left(A^{\mathrm{T}}\right)^{-1}. \qquad (5.17)$$

Figure 5.4 illustrates this for

$$A = \begin{bmatrix} 1 & 0 \\ 1 & 0.5 \end{bmatrix}.$$

Figure 5.4

Inverse matrices: the top illustrates I, A−1, (A−1)T and the bottom illustrates I, AT, (AT)−1.

Given a matrix A, how do we compute its inverse? Let us start with

$$AA^{-1} = I. \qquad (5.18)$$

If we denote the two (unknown) columns of A−1 by a¯1 and a¯2, and those of I by e1 and e2, then (5.18) may be written as

$$A \begin{bmatrix} \bar{\mathbf{a}}_1 & \bar{\mathbf{a}}_2 \end{bmatrix} = \begin{bmatrix} \mathbf{e}_1 & \mathbf{e}_2 \end{bmatrix}.$$

This is really short for two linear systems

$$A\bar{\mathbf{a}}_1 = \mathbf{e}_1 \quad \text{and} \quad A\bar{\mathbf{a}}_2 = \mathbf{e}_2.$$

Both systems have the same matrix A and can thus be solved simultaneously. All we have to do is to apply the familiar shears and scale—those that transform A to I—to both e1 and e2.
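In code, this amounts to carrying the identity along while A is reduced; a sketch (with row pivoting added for safety, which the worked example below does not need):

```python
def inverse_2x2(a, b, c, d, eps=1e-12):
    """Invert [[a, b], [c, d]] by transforming (A | I) into (I | A^{-1})."""
    m = [[a, b, 1.0, 0.0],        # the augmented rows (A | I)
         [c, d, 0.0, 1.0]]
    if abs(m[0][0]) < abs(m[1][0]):
        m[0], m[1] = m[1], m[0]   # row pivoting
    if abs(m[0][0]) < eps:
        return None               # singular matrix
    s = m[1][0] / m[0][0]         # first shear: zero the lower left
    m[1] = [x - s * y for x, y in zip(m[1], m[0])]
    if abs(m[1][1]) < eps:
        return None               # singular matrix
    s = m[0][1] / m[1][1]         # second shear: zero the upper right
    m[0] = [x - s * y for x, y in zip(m[0], m[1])]
    # Scale both rows so the left block becomes the identity.
    return [[m[0][2] / m[0][0], m[0][3] / m[0][0]],
            [m[1][2] / m[1][1], m[1][3] / m[1][1]]]

print(inverse_2x2(1, 4, 2, -2))   # [[0.2, 0.4], [0.2, -0.1]], see Example 5.8
```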

Example 5.8

Let’s revisit Example 5.3 with

$$A = \begin{bmatrix} 1 & 4 \\ 2 & -2 \end{bmatrix}.$$

Our two simultaneous systems are:

$$\begin{bmatrix} 1 & 4 \\ 2 & -2 \end{bmatrix} \begin{bmatrix} \bar{\mathbf{a}}_1 & \bar{\mathbf{a}}_2 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}.$$

The first shear takes this to

$$\begin{bmatrix} 1 & 4 \\ 0 & -10 \end{bmatrix} \begin{bmatrix} \bar{\mathbf{a}}_1 & \bar{\mathbf{a}}_2 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ -2 & 1 \end{bmatrix}.$$

The second shear yields

$$\begin{bmatrix} 1 & 0 \\ 0 & -10 \end{bmatrix} \begin{bmatrix} \bar{\mathbf{a}}_1 & \bar{\mathbf{a}}_2 \end{bmatrix} = \begin{bmatrix} 2/10 & 4/10 \\ -2 & 1 \end{bmatrix}.$$

Finally the scaling produces

$$\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} \bar{\mathbf{a}}_1 & \bar{\mathbf{a}}_2 \end{bmatrix} = \begin{bmatrix} 2/10 & 4/10 \\ 2/10 & -1/10 \end{bmatrix}.$$

Thus the inverse matrix is

$$A^{-1} = \begin{bmatrix} 2/10 & 4/10 \\ 2/10 & -1/10 \end{bmatrix}.$$

It can be the case that a matrix A does not have an inverse. For example, the matrix

$$\begin{bmatrix} 2 & 1 \\ 4 & 2 \end{bmatrix}$$

is not invertible because the columns are linearly dependent (and therefore the determinant is zero). A noninvertible matrix is also referred to as singular. If we try to compute the inverse by setting up two simultaneous systems,

$$\begin{bmatrix} 2 & 1 \\ 4 & 2 \end{bmatrix} \begin{bmatrix} \bar{\mathbf{a}}_1 & \bar{\mathbf{a}}_2 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix},$$

then the first shear produces

$$\begin{bmatrix} 2 & 1 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} \bar{\mathbf{a}}_1 & \bar{\mathbf{a}}_2 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ -2 & 1 \end{bmatrix}.$$

At this point it is clear that we will not be able to construct linear maps to achieve the identity matrix on the left side of the equation. Examples of singular matrices were introduced in Section 5.6, Section 5.7 and Section 5.8.

5.10 Defining a Map

Matrices map vectors to vectors. If we know the result of such a map, namely that two vectors v1 and v2 were mapped to v1′ and v2′, can we find the matrix that did it?

Suppose some matrix A was responsible for the map. We would then have the two equations

$$A\mathbf{v}_1 = \mathbf{v}_1' \quad \text{and} \quad A\mathbf{v}_2 = \mathbf{v}_2'.$$

Combining them, we can write

$$A[\mathbf{v}_1, \mathbf{v}_2] = [\mathbf{v}_1', \mathbf{v}_2'],$$

or, even shorter,

$$AV = V'. \qquad (5.19)$$

To define A, we simply find V−1; then

$$A = V'V^{-1}.$$

Of course v1 and v2 must be linearly independent for V−1 to exist. If the vi and the vi′ are each linearly independent, then A represents a change of basis.

Example 5.9

Let’s find the linear map A that maps the basis V formed by vectors

$$\mathbf{v}_1 = \begin{bmatrix} 1 \\ 1 \end{bmatrix} \quad \text{and} \quad \mathbf{v}_2 = \begin{bmatrix} -1 \\ 1 \end{bmatrix}$$

to the basis V′ formed by vectors

$$\mathbf{v}_1' = \begin{bmatrix} 1 \\ -1 \end{bmatrix} \quad \text{and} \quad \mathbf{v}_2' = \begin{bmatrix} -1 \\ -1 \end{bmatrix}$$

as illustrated in Sketch 5.10.

Sketch 5.10

Sketch showing change of basis.

Change of basis.

First, we find V−1 following the steps in Example 5.8, resulting in

$$V^{-1} = \begin{bmatrix} 1/2 & 1/2 \\ -1/2 & 1/2 \end{bmatrix}.$$

The change of basis linear map is

$$A = V'V^{-1} = \begin{bmatrix} 1 & -1 \\ -1 & -1 \end{bmatrix} \begin{bmatrix} 1/2 & 1/2 \\ -1/2 & 1/2 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}.$$

Check that each vi is mapped to vi′. If we have any vector v in the V basis, this map will return the coordinates of its corresponding vector v′ in V′. Sketch 5.10 illustrates that v = [0 1]T is mapped to v′ = [0 −1]T.
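A quick numerical check of this example (a NumPy sketch; the helper name is my own):

```python
import numpy as np

def map_from_images(v1, v2, v1p, v2p):
    """Return the matrix A with A v1 = v1p and A v2 = v2p.
    Requires v1 and v2 to be linearly independent."""
    V  = np.column_stack([v1, v2])
    Vp = np.column_stack([v1p, v2p])
    return Vp @ np.linalg.inv(V)   # A = V' V^{-1}

A = map_from_images([1, 1], [-1, 1], [1, -1], [-1, -1])
print(A)                # [[ 1.  0.] [ 0. -1.]]
print(A @ [0, 1])       # [ 0. -1.]: v = [0 1]^T maps to v' = [0 -1]^T
```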

In Section 6.5 we’ll revisit this topic with an application.

5.11 A Dual View

Let’s take a moment to recognize a dual view of linear systems. A coordinate system or linear combination approach (5.1) represents what we might call the “column view.” If instead we focus on the row equations (5.5) and (5.6), we take a “row view.” A great example of this can be found by revisiting two line intersection scenarios in Examples 3.9 and 3.10. In the former, we are intersecting two lines in parametric form, and the problem statement takes the column view by asking what linear combination of the column vectors results in the right-hand side. In the latter, we are intersecting two lines in implicit form, and the problem statement takes the row view by asking what u1 and u2 satisfy both line equations. Depending on the problem at hand, we can choose the view that best suits our given information.

We took a column view in our approach to presenting 2 × 2 linear systems, but equally valid would be a row view. Let’s look at the key examples from this chapter as if they were posed as implicit line intersection problems. Figure 5.5 illustrates linear systems from Example 5.1 (unique solution), the example in Section 5.6 (inconsistent linear system), and the example in Section 5.7 (underdetermined system). Importantly, the column and row views of the systems result in the same classification of the solution sets.

Figure 5.5

Linear system classification: three linear systems interpreted as line intersection problems. Left to right: unique solution, inconsistent, underdetermined.

Figure 5.6 illustrates two types of homogeneous systems from examples in Section 5.8. Since the right-hand side of each line equation is zero, the lines will pass through the origin. This guarantees the trivial solution for both intersection problems. The system with nontrivial solutions is depicted on the right as two identical lines.

Figure 5.6

Homogeneous linear system classification: two homogeneous linear systems interpreted as line intersection problems. Left to right: trivial solution only; nontrivial solutions.


  • linear system
  • solution spaces
  • consistent linear system
  • Cramer’s rule
  • upper triangular
  • Gauss elimination
  • forward elimination
  • back substitution
  • linear combination
  • inverse matrix
  • orthogonal matrix
  • orthonormal
  • rigid body motion
  • inconsistent system of equations
  • underdetermined system of equations
  • homogeneous system
  • kernel
  • null space
  • row pivoting
  • column pivoting
  • complete pivoting
  • change of basis
  • column and row views of linear systems

5.12 Exercises

  1. Using the matrix form, write down the linear system to express

    $$\begin{bmatrix} 6 \\ 3 \end{bmatrix}$$

    in terms of the local coordinate system defined by the origin,

    $$\mathbf{a}_1 = \begin{bmatrix} 2 \\ 3 \end{bmatrix}, \quad \text{and} \quad \mathbf{a}_2 = \begin{bmatrix} 6 \\ 0 \end{bmatrix}.$$

  2. Is the following linear system consistent? Why?

    $$\begin{bmatrix} 1 & 2 \\ 0 & 0 \end{bmatrix} \mathbf{u} = \begin{bmatrix} 0 \\ 4 \end{bmatrix}.$$

  3. What are the three possibilities for the solution space of a linear system Au = b?
  4. Use Cramer’s rule to solve the system in Exercise 1.
  5. Use Cramer’s rule to solve the system

    $$\begin{bmatrix} 2 & 1 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 8 \\ 2 \end{bmatrix}.$$

  6. Give an example of an upper triangular matrix.
  7. Use Gauss elimination to solve the system in Exercise 1.
  8. Use Gauss elimination to solve the system

    $$\begin{bmatrix} 4 & 2 \\ 2 & 1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 2 \\ 1 \end{bmatrix}.$$

  9. Use Gauss elimination with pivoting to solve the system

    $$\begin{bmatrix} 0 & 4 \\ 2 & 2 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 8 \\ 6 \end{bmatrix}.$$

  10. Resolve the system in Exercise 1 with Gauss elimination with pivoting.
  11. Give an example by means of a sketch of an unsolvable system. Do the same for an underdetermined system.
  12. Under what conditions can a nontrivial solution to a homogeneous system be found?
  13. Does the following homogeneous system have a nontrivial solution?

    $$\begin{bmatrix} 2 & 2 \\ 0 & 4 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}.$$

  14. What is the kernel of the matrix

    $$C = \begin{bmatrix} 2 & 6 \\ 4 & 12 \end{bmatrix}?$$

  15. What is the null space of the matrix

    $$C = \begin{bmatrix} 2 & 4 \\ 1 & 2 \end{bmatrix}?$$

  16. Find the inverse of the matrix in Exercise 1.
  17. What is the inverse of the matrix

    $$\begin{bmatrix} 1 & 0 \\ 0 & 0.5 \end{bmatrix}?$$

  18. What is the inverse of the matrix

    $$\begin{bmatrix} \cos 30° & -\sin 30° \\ \sin 30° & \cos 30° \end{bmatrix}?$$

  19. What type of matrix has the property that A−1 = AT? Give an example.
  20. What is the inverse of

    $$\begin{bmatrix} 1 & 1 \\ 0 & 0 \end{bmatrix}?$$

  21. Define the matrix A that maps

    $$\begin{bmatrix} 1 \\ 0 \end{bmatrix} \mapsto \begin{bmatrix} 1 \\ 0 \end{bmatrix} \quad \text{and} \quad \begin{bmatrix} 1 \\ 1 \end{bmatrix} \mapsto \begin{bmatrix} 1 \\ 1 \end{bmatrix}.$$

  22. Define the matrix A that maps

    $$\begin{bmatrix} 2 \\ 0 \end{bmatrix} \mapsto \begin{bmatrix} 1 \\ 1 \end{bmatrix} \quad \text{and} \quad \begin{bmatrix} 0 \\ 4 \end{bmatrix} \mapsto \begin{bmatrix} 1 \\ 1 \end{bmatrix}.$$

1 Gauss elimination and forward elimination are often used interchangeably.
