Linear algebra (numpy.linalg)#
The NumPy linear algebra functions rely on BLAS and LAPACK to provide efficient low-level implementations of standard linear algebra algorithms. Those libraries may be provided by NumPy itself using C versions of a subset of their reference implementations but, when possible, highly optimized libraries that take advantage of specialized processor functionality are preferred. Examples of such libraries are OpenBLAS, MKL (TM), and ATLAS. Because those libraries are multithreaded and processor dependent, environment variables and external packages such as threadpoolctl may be needed to control the number of threads or specify the processor architecture.
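For example, a minimal sketch of how thread usage might be inspected and limited at runtime, assuming the optional threadpoolctl package is installed:

    import numpy as np
    from threadpoolctl import threadpool_info, threadpool_limits  # optional external package

    print(threadpool_info())        # report which BLAS/OpenMP libraries NumPy is linked against

    a = np.random.rand(2000, 2000)
    with threadpool_limits(limits=1, user_api="blas"):
        # BLAS-backed calls inside this block run on a single thread
        b = a @ a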
The SciPy library also contains a linalg submodule, and there is overlap in the functionality provided by the SciPy and NumPy submodules. SciPy contains functions not found in numpy.linalg, such as functions related to LU decomposition and the Schur decomposition, multiple ways of calculating the pseudoinverse, and matrix transcendentals such as the matrix logarithm. Some functions that exist in both have augmented functionality in scipy.linalg. For example, scipy.linalg.eig can take a second matrix argument for solving generalized eigenvalue problems. Some functions in NumPy, however, have more flexible broadcasting options. For example, numpy.linalg.solve can handle “stacked” arrays, while scipy.linalg.solve accepts only a single square array as its first argument.
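As a rough sketch of both behaviours (the generalized eigenvalue call assumes SciPy is installed):

    import numpy as np
    from scipy import linalg as sla     # SciPy is assumed to be installed

    rng = np.random.default_rng(0)

    # numpy.linalg.solve broadcasts over a "stack" of matrices.
    A = rng.standard_normal((5, 3, 3))  # five independent 3x3 systems
    b = rng.standard_normal((5, 3, 1))  # one right-hand side per system
    x = np.linalg.solve(A, b)           # shape (5, 3, 1), solved system by system

    # scipy.linalg.eig accepts a second matrix B for the generalized
    # eigenvalue problem A v = w B v (a single square matrix only).
    A0 = rng.standard_normal((3, 3))
    B0 = np.eye(3)
    w, v = sla.eig(A0, B0)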
Note
The term matrix as it is used on this page indicates a 2d numpy.array object, and not a numpy.matrix object. The latter is no longer recommended, even for linear algebra. See the matrix object documentation for more information.
The @ operator#
Introduced in NumPy 1.10.0, the @ operator is preferable to other methods when computing the matrix product between 2d arrays. The numpy.matmul function implements the @ operator.
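A short illustration:

    import numpy as np

    A = np.arange(6).reshape(2, 3)
    B = np.arange(12).reshape(3, 4)

    C = A @ B                                    # preferred spelling of the matrix product
    assert np.array_equal(C, np.matmul(A, B))    # @ dispatches to numpy.matmul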
Matrix and vector products#
dot | Dot product of two arrays.
linalg.multi_dot | Compute the dot product of two or more arrays in a single function call, while automatically selecting the fastest evaluation order.
vdot | Return the dot product of two vectors.
vecdot | Vector dot product of two arrays.
linalg.vecdot | Computes the vector dot product.
inner | Inner product of two arrays.
outer | Compute the outer product of two vectors.
matmul | Matrix product of two arrays.
linalg.matmul | Computes the matrix product.
matvec | Matrix-vector dot product of two arrays.
vecmat | Vector-matrix dot product of two arrays.
tensordot | Compute tensor dot product along specified axes.
linalg.tensordot | Compute tensor dot product along specified axes.
einsum | Evaluates the Einstein summation convention on the operands.
einsum_path | Evaluates the lowest cost contraction order for an einsum expression by considering the creation of intermediate arrays.
linalg.matrix_power | Raise a square matrix to the (integer) power n.
kron | Kronecker product of two arrays.
linalg.cross | Returns the cross product of 3-element vectors.
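A brief, non-exhaustive sketch of a few of these routines:

    import numpy as np

    x = np.arange(3.0)
    y = np.arange(3.0, 6.0)

    np.dot(x, y)                          # 1-D dot product, returns a scalar
    np.outer(x, y)                        # 3x3 outer product
    np.einsum("i,j->ij", x, y)            # the same outer product via Einstein summation
    np.kron(np.eye(2), np.ones((2, 2)))   # Kronecker product, shape (4, 4)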
Decompositions#
linalg.cholesky | Cholesky decomposition.
linalg.outer | Compute the outer product of two vectors.
linalg.qr | Compute the qr factorization of a matrix.
linalg.svd | Singular Value Decomposition.
linalg.svdvals | Returns the singular values of a matrix (or a stack of matrices).
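For instance, a small sketch of the SVD, QR, and Cholesky routines listed above:

    import numpy as np

    rng = np.random.default_rng(0)
    a = rng.standard_normal((4, 3))

    U, s, Vh = np.linalg.svd(a, full_matrices=False)
    np.allclose(a, (U * s) @ Vh)          # True: a is recovered from its thin SVD

    Q, R = np.linalg.qr(a)                # reduced QR factorization
    L = np.linalg.cholesky(a.T @ a)       # Cholesky factor of a positive-definite matrix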
Matrix eigenvalues#
linalg.eig | Compute the eigenvalues and right eigenvectors of a square array.
linalg.eigh | Return the eigenvalues and eigenvectors of a complex Hermitian (conjugate symmetric) or a real symmetric matrix.
linalg.eigvals | Compute the eigenvalues of a general matrix.
linalg.eigvalsh | Compute the eigenvalues of a complex Hermitian or real symmetric matrix.
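As an illustration, a minimal sketch for a real symmetric matrix:

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 3.0]])        # real symmetric

    w, v = np.linalg.eigh(A)          # ascending eigenvalues, orthonormal eigenvectors
    np.allclose(A @ v, v * w)         # True: A v_i = w_i v_i, column by column
    np.linalg.eigvalsh(A)             # eigenvalues only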
Norms and other numbers#
linalg.norm | Matrix or vector norm.
linalg.matrix_norm | Computes the matrix norm of a matrix (or a stack of matrices).
linalg.vector_norm | Computes the vector norm of a vector (or batch of vectors).
linalg.cond | Compute the condition number of a matrix.
linalg.det | Compute the determinant of an array.
linalg.matrix_rank | Return matrix rank of array using SVD method.
linalg.slogdet | Compute the sign and (natural) logarithm of the determinant of an array.
trace | Return the sum along diagonals of the array.
linalg.trace | Returns the sum along the specified diagonals of a matrix (or a stack of matrices).
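A quick sketch of some of these quantities for a small matrix:

    import numpy as np

    A = np.array([[1.0, 2.0],
                  [3.0, 4.0]])

    np.linalg.norm(A)                 # Frobenius norm (default for matrices)
    np.linalg.norm(A, ord=2)          # spectral norm (largest singular value)
    np.linalg.det(A)                  # -2.0
    np.linalg.cond(A)                 # condition number in the 2-norm
    np.linalg.matrix_rank(A)          # 2
    sign, logabsdet = np.linalg.slogdet(A)   # sign * exp(logabsdet) equals det(A)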
Solving equations and inverting matrices#
linalg.solve | Solve a linear matrix equation, or system of linear scalar equations.
linalg.tensorsolve | Solve the tensor equation a x = b for x.
linalg.lstsq | Return the least-squares solution to a linear matrix equation.
linalg.inv | Compute the inverse of a matrix.
linalg.pinv | Compute the (Moore-Penrose) pseudo-inverse of a matrix.
linalg.tensorinv | Compute the 'inverse' of an N-dimensional array.
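For example, a minimal least-squares sketch, fitting a straight line y = m*x + c:

    import numpy as np

    x = np.array([0.0, 1.0, 2.0, 3.0])
    y = np.array([-1.0, 0.2, 0.9, 2.1])
    M = np.column_stack([x, np.ones_like(x)])       # design matrix for y = m*x + c

    coef, residuals, rank, sv = np.linalg.lstsq(M, y, rcond=None)

    # The Moore-Penrose pseudo-inverse yields the same least-squares solution.
    np.allclose(coef, np.linalg.pinv(M) @ y)        # True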
Other matrix operations#
diagonal | Return specified diagonals.
linalg.diagonal | Returns specified diagonals of a matrix (or a stack of matrices).
linalg.matrix_transpose | Transposes a matrix (or a stack of matrices).
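A small sketch, assuming NumPy 2.0 or later for the numpy.linalg variants:

    import numpy as np

    stack = np.arange(2 * 3 * 3).reshape(2, 3, 3)   # a stack of two 3x3 matrices

    np.linalg.diagonal(stack).shape                 # (2, 3): the diagonal of each matrix
    np.linalg.matrix_transpose(stack).shape         # (2, 3, 3): each matrix transposed
    np.diagonal(stack, axis1=1, axis2=2)            # equivalent diagonals via numpy.diagonal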
Exceptions#
linalg.LinAlgError | Generic Python-exception-derived object raised by linalg functions.
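For example, attempting to invert a singular matrix raises this exception:

    import numpy as np

    singular = np.zeros((2, 2))
    try:
        np.linalg.inv(singular)
    except np.linalg.LinAlgError as exc:
        print("inversion failed:", exc)     # e.g. "Singular matrix"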
Linear algebra on several matrices at once#
Several of the linear algebra routines listed above are able to compute results for several matrices at once, if they are stacked into the same array. This is indicated in the documentation via input parameter specifications such as a : (..., M, M) array_like. This means that, for instance, an input array with a.shape == (N, M, M) is interpreted as a “stack” of N matrices, each of size M-by-M. A similar specification applies to return values; for instance, the determinant has det : (...) and will in this case return an array of shape det(a).shape == (N,). This generalizes to linear algebra operations on higher-dimensional arrays: the last 1 or 2 dimensions of a multidimensional array are interpreted as vectors or matrices, as appropriate for each operation.
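For instance, a minimal sketch of the stacking behaviour using the determinant and inverse:

    import numpy as np

    rng = np.random.default_rng(0)
    a = rng.standard_normal((4, 3, 3))   # a.shape == (N, M, M) with N=4, M=3

    d = np.linalg.det(a)                 # one determinant per 3x3 matrix
    d.shape                              # (4,)

    inv = np.linalg.inv(a)               # inverses computed matrix by matrix
    inv.shape                            # (4, 3, 3)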