Before we get into the linear algebra routines in NumPy, there are four vector products we will cover at the beginning of this section. Let's review them one by one, starting with the numpy.dot() product:
In [26]: x = np.array([[1, 2], [3, 4]])

In [27]: y = np.array([[10, 20], [30, 40]])

In [28]: np.dot(x, y)
Out[28]:
array([[ 70, 100],
       [150, 220]])
The numpy.dot() function performs matrix multiplication. The detailed calculation is as follows: [[1*10 + 2*30, 1*20 + 2*40], [3*10 + 4*30, 3*20 + 4*40]] = [[70, 100], [150, 220]].
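To make the calculation concrete, here is a small sketch (reusing the same x and y arrays as above) that reproduces np.dot() with explicit loops:

```python
import numpy as np

x = np.array([[1, 2], [3, 4]])
y = np.array([[10, 20], [30, 40]])

# Manual matrix multiplication: result[i, j] = sum over k of x[i, k] * y[k, j]
manual = np.zeros((2, 2), dtype=int)
for i in range(2):
    for j in range(2):
        for k in range(2):
            manual[i, j] += x[i, k] * y[k, j]

print(manual)        # [[ 70 100]
                     #  [150 220]]
print(np.dot(x, y))  # same result
```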
numpy.vdot() handles multi-dimensional arrays differently than numpy.dot(). It does not perform a matrix product; instead, it flattens the input arguments to one-dimensional vectors first and returns their inner product:
In [29]: np.vdot(x, y)
Out[29]: 300
The detailed calculation of numpy.vdot() is as follows: 1*10 + 2*20 + 3*30 + 4*40 = 300.
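The flatten-then-inner-product behavior can be sketched directly (again with the x and y arrays defined above):

```python
import numpy as np

x = np.array([[1, 2], [3, 4]])
y = np.array([[10, 20], [30, 40]])

# vdot flattens both inputs, then takes the ordinary inner product
flat_sum = sum(a * b for a, b in zip(x.ravel(), y.ravel()))
print(flat_sum)       # 1*10 + 2*20 + 3*30 + 4*40 = 300
print(np.vdot(x, y))  # 300
```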
The numpy.outer() function computes the outer product of two vectors. It flattens the input arrays if they are not one-dimensional. Say the flattened input vector A has shape (M,) and the flattened input vector B has shape (N,); then the result has shape (M, N):
In [100]: np.outer(x, y)
Out[100]:
array([[ 10,  20,  30,  40],
       [ 20,  40,  60,  80],
       [ 30,  60,  90, 120],
       [ 40,  80, 120, 160]])
The detailed calculation of numpy.outer() is as follows: the inputs are flattened to [1, 2, 3, 4] and [10, 20, 30, 40], and each element of the result is out[i, j] = A[i] * B[j].
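Since out[i, j] = A[i] * B[j], the same result can be sketched with broadcasting, which is a useful way to see what numpy.outer() does under the hood:

```python
import numpy as np

x = np.array([[1, 2], [3, 4]])
y = np.array([[10, 20], [30, 40]])

# outer flattens both inputs; broadcasting a column vector against a
# row vector computes exactly out[i, j] = A[i] * B[j]
A = x.ravel()  # [1 2 3 4], shape (4,)
B = y.ravel()  # [10 20 30 40], shape (4,)
broadcast = A[:, np.newaxis] * B[np.newaxis, :]

print(broadcast.shape)                      # (4, 4)
print((broadcast == np.outer(x, y)).all())  # True
```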
The last one is the numpy.cross() product, a binary operation of two vectors (it works only for vectors) in three-dimensional space; the result is a vector perpendicular to both inputs (a, b). If you are not familiar with the cross product, please refer to https://en.wikipedia.org/wiki/Cross_product. The following example shows the cross products of (a, b) and (b, a):
In [31]: a = np.array([1, 0, 0])

In [32]: b = np.array([0, 1, 0])

In [33]: np.cross(a, b)
Out[33]: array([0, 0, 1])

In [34]: np.cross(b, a)
Out[34]: array([ 0,  0, -1])
The cross product of two vectors a and b is denoted by a x b and computed component-wise as a x b = (a2*b3 - a3*b2, a3*b1 - a1*b3, a1*b2 - a2*b1); for a = (1, 0, 0) and b = (0, 1, 0), this gives (0, 0, 1).
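The component formula can be sketched as a small helper function (cross3 is an illustrative name, not a NumPy function) and checked against np.cross():

```python
import numpy as np

def cross3(a, b):
    """Cross product of two 3-vectors from the component formula."""
    return np.array([a[1] * b[2] - a[2] * b[1],
                     a[2] * b[0] - a[0] * b[2],
                     a[0] * b[1] - a[1] * b[0]])

a = np.array([1, 0, 0])
b = np.array([0, 1, 0])

print(cross3(a, b))    # [0 0 1]
print(np.cross(b, a))  # [ 0  0 -1] -- the cross product is anticommutative
```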
The previous functions are NumPy's standard vector routines. Now we are going to talk about the key topic of this section: the numpy.linalg submodule for linear algebra. Using the NumPy ndarray with numpy.linalg is preferable to using numpy.matrix().
In the following examples, we will go through the rest of the basic operations of numpy.linalg and use them to solve the linear equation from the matrix section:
In [35]: x = np.array([[4, 8], [7, 9]])

In [36]: np.linalg.det(x)
Out[36]: -20.000000000000007
The previous example computes the determinant of a square array. Of course, we can use numpy.linalg.inv() to compute the inverse of an array, just as we use numpy.matrix.I:
In [37]: np.linalg.inv(x)
Out[37]:
array([[-0.45,  0.4 ],
       [ 0.35, -0.2 ]])

In [38]: np.mat(x).I
Out[38]:
matrix([[-0.45,  0.4 ],
        [ 0.35, -0.2 ]])
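A quick sanity check, sketched here for the same x array: a matrix multiplied by its inverse should give the identity matrix (up to floating-point rounding):

```python
import numpy as np

x = np.array([[4, 8], [7, 9]])
inv = np.linalg.inv(x)

# x times its inverse should recover the 2x2 identity matrix
product = x.dot(inv)
print(np.allclose(product, np.eye(2)))  # True
```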
From the previous example, we can see that numpy.linalg.inv() produces the same result as numpy.matrix.I; the only difference is that numpy.linalg returns an ndarray. Next, we will go back to the linear equation Ax = b to see how we can use numpy.linalg.solve() to achieve the same result as using the matrix object:
In [39]: x = np.linalg.solve(A, b)

In [40]: x
Out[40]:
matrix([[ 0.2667],
        [ 0.4667],
        [-0.0667]])
numpy.linalg.solve(A, b) computes the solution for x, where the first input parameter (A) stands for the coefficient array and the second parameter (b) stands for the dependent variable values. The numpy.linalg.solve() function honors the input data type: in this example, we use matrices as input, so the output also returns a matrix x. We can also use ndarrays as our inputs.
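As a sketch of the ndarray path, here is a small hypothetical system (the coefficients below are illustrative, not the A and b used earlier); the result comes back as an ndarray, matching the input type:

```python
import numpy as np

# Hypothetical 2x2 system (illustrative coefficients only):
#   4*x0 + 8*x1 = 12
#   7*x0 + 9*x1 = 16
A = np.array([[4.0, 8.0], [7.0, 9.0]])
b = np.array([12.0, 16.0])

x = np.linalg.solve(A, b)
print(type(x))                   # <class 'numpy.ndarray'> -- matches the input type
print(np.allclose(A.dot(x), b))  # True: the solution satisfies Ax = b
```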
When doing linear algebra with NumPy, it's better to stick to one data type, either ndarray or matrix; mixing the two types in a calculation is not recommended. One reason is to reduce conversions between the data types; the other is to avoid unexpected errors when computing with both. Since ndarray has fewer restrictions on data dimensions and can perform all matrix-like operations, using ndarray with numpy.linalg is preferred over matrix.