numpy.typing
Warning
Some of the types in this module rely on features only present in the standard library in Python 3.8 and greater. If you want to use these types in earlier versions of Python, you should install the typing-extensions package.
Large parts of the NumPy API have PEP 484-style type annotations. In addition, a number of type aliases are available to users, most prominently the two below:
ArrayLike: objects that can be converted to arrays
DTypeLike: objects that can be converted to dtypes
NumPy is very flexible. Trying to describe the full range of possibilities statically would result in types that are not very helpful. For that reason, the typed NumPy API is often stricter than the runtime NumPy API. This section describes some notable differences.
The ArrayLike type tries to avoid creating object arrays. For example,
>>> np.array(x**2 for x in range(10))
array(<generator object <genexpr> at ...>, dtype=object)
is valid NumPy code which will create a 0-dimensional object array. Type checkers will, however, complain about the above example when using the NumPy types. If you really intended to do the above, then you can either use a # type: ignore comment:
>>> np.array(x**2 for x in range(10)) # type: ignore
or explicitly type the array like object as Any:
>>> from typing import Any
>>> array_like: Any = (x**2 for x in range(10))
>>> np.array(array_like)
array(<generator object <genexpr> at ...>, dtype=object)
It’s possible to mutate the dtype of an array at runtime. For example, the following code is valid:
>>> x = np.array([1, 2])
>>> x.dtype = np.bool_
This sort of mutation is not allowed by the types. Users who want to write statically typed code should instead use the numpy.ndarray.view method to create a view of the array with a different dtype.
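As a minimal sketch of the view-based alternative (the dtypes chosen here are purely illustrative):

>>> x = np.array([1, 2], dtype=np.int64)
>>> y = x.view(np.uint64)  # same buffer, reinterpreted with a different dtype
>>> y.dtype
dtype('uint64')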
The DTypeLike type tries to avoid creation of dtype objects using a dictionary of fields like the one below:
>>> x = np.dtype({"field1": (float, 1), "field2": (int, 3)})
Although this is valid NumPy code, the type checker will complain about it, since its usage is discouraged. Please see Data type objects for more information.
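As a hedged sketch of a form that type checkers do accept, the same kind of structured dtype can be built from a list of (name, type) tuples; the field names and types below are purely illustrative:

>>> x = np.dtype([("field1", np.float64), ("field2", np.int64)])
>>> x.names
('field1', 'field2')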
The precision of numpy.number subclasses is treated as a covariant generic parameter (see NBitBase), simplifying the annotation of processes involving precision-based casting.
>>> from typing import TypeVar
>>> import numpy as np
>>> import numpy.typing as npt

>>> T = TypeVar("T", bound=npt.NBitBase)

>>> def func(a: "np.floating[T]", b: "np.floating[T]") -> "np.floating[T]":
...     ...
Consequently, the likes of float16, float32 and float64 are still subtypes of floating, but, contrary to runtime, they are not necessarily considered subclasses.
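For instance, in the illustrative sketch below (the function name is hypothetical), a parameter annotated as np.floating[Any] accepts any of these concrete precisions, since each is a subtype of floating during static type checking:

>>> from typing import Any
>>> import numpy as np

>>> def as_python_float(x: "np.floating[Any]") -> float:
...     return float(x)

>>> as_python_float(np.float16(1.0))
1.0
>>> as_python_float(np.float64(1.0))
1.0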
The timedelta64 class is not considered a subclass of signedinteger, the former only inheriting from generic during static type checking.
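A small sketch of the consequence, assuming a type checker such as mypy running over the NumPy stubs: the assignment below is fine at runtime, where timedelta64 does inherit from signedinteger, but it would be flagged during static type checking:

>>> from typing import Any
>>> import numpy as np

>>> issubclass(np.timedelta64, np.signedinteger)  # runtime behaviour
True
>>> delta: "np.signedinteger[Any]" = np.timedelta64(1, "D")  # rejected by a type checker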
numpy.typing.ArrayLike
A Union representing objects that can be coerced into an ndarray.
Among others this includes the likes of:
Scalars.
(Nested) sequences.
Objects implementing the __array__ protocol.
See Also
array_like: Any scalar or sequence that can be interpreted as an ndarray.
Examples
>>> import numpy as np
>>> import numpy.typing as npt

>>> def as_array(a: npt.ArrayLike) -> np.ndarray:
...     return np.array(a)
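As a follow-up usage sketch, the hypothetical class below implements the __array__ protocol and is therefore also accepted wherever ArrayLike is expected, alongside scalars and nested sequences:

>>> class Wrapper:
...     def __init__(self, data):
...         self.data = data
...     def __array__(self, dtype=None):
...         return np.array(self.data, dtype=dtype)

>>> as_array(1.0)                 # a scalar
array(1.)
>>> as_array([[1, 2], [3, 4]])    # a nested sequence
array([[1, 2],
       [3, 4]])
>>> as_array(Wrapper([1, 2, 3]))  # an object implementing __array__
array([1, 2, 3])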
numpy.typing.DTypeLike
A Union representing objects that can be coerced into a dtype.
Among others this includes the likes of:
type objects.
Character codes or the names of type objects.
Objects with the .dtype attribute.
See Also
Specifying and constructing data types: A comprehensive overview of all objects that can be coerced into data types.
Examples
>>> import numpy as np
>>> import numpy.typing as npt

>>> def as_dtype(d: npt.DTypeLike) -> np.dtype:
...     return np.dtype(d)
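As a follow-up usage sketch, the same function accepts the kinds of objects listed above:

>>> as_dtype(np.int64)    # a type object
dtype('int64')
>>> as_dtype("float64")   # the name of a type
dtype('float64')
>>> as_dtype("f8")        # a character code
dtype('float64')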
class numpy.typing.NBitBase
An object representing numpy.number precision during static type checking.
Used exclusively for the purpose of static type checking, NBitBase represents the base of a hierarchical set of subclasses. Each subsequent subclass is herein used for representing a lower level of precision, e.g. 64Bit > 32Bit > 16Bit.
Below is a typical usage example: NBitBase is herein used for annotating a function that takes a float and an integer of arbitrary precision as arguments and returns a new float of whichever precision is largest (e.g. np.float16 + np.int64 -> np.float64).
>>> from typing import TypeVar, TYPE_CHECKING
>>> import numpy as np
>>> import numpy.typing as npt

>>> T = TypeVar("T", bound=npt.NBitBase)

>>> def add(a: "np.floating[T]", b: "np.integer[T]") -> "np.floating[T]":
...     return a + b

>>> a = np.float16()
>>> b = np.int64()
>>> out = add(a, b)

>>> if TYPE_CHECKING:
...     reveal_locals()
...     # note: Revealed local types are:
...     # note:     a: numpy.floating[numpy.typing._16Bit*]
...     # note:     b: numpy.signedinteger[numpy.typing._64Bit*]
...     # note:     out: numpy.floating[numpy.typing._64Bit*]