Typing (numpy.typing)#

New in version 1.20.

Large parts of the NumPy API have PEP 484-style type annotations. In addition, a number of type aliases are available to users, most prominently the two below:

  • ArrayLike: objects that can be converted to arrays

  • DTypeLike: objects that can be converted to dtypes

Mypy plugin#

New in version 1.21.

A mypy plugin for managing a number of platform-specific annotations. Its functionality can be split into three distinct parts:

  • Assigning the (platform-dependent) precisions of certain number subclasses, including the likes of int_, intp and longlong. See the documentation on scalar types for a comprehensive overview of the affected classes. Without the plugin the precision of all relevant classes will be inferred as Any.

  • Removing all extended-precision number subclasses that are unavailable for the platform in question. Most notably this includes the likes of float128 and complex256. Without the plugin all extended-precision types will, as far as mypy is concerned, be available to all platforms.

  • Assigning the (platform-dependent) precision of c_intp. Without the plugin the type will default to ctypes.c_int64.

    New in version 1.22.

Examples#

To enable the plugin, add it to your mypy configuration file:

[mypy]
plugins = numpy.typing.mypy_plugin
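
If you configure mypy through pyproject.toml rather than a dedicated mypy configuration file, the equivalent setting would look roughly as follows (a sketch assuming a mypy version with pyproject.toml support):

[tool.mypy]
plugins = ["numpy.typing.mypy_plugin"]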

Differences from the runtime NumPy API#

NumPy is very flexible. Trying to describe the full range of possibilities statically would result in types that are not very helpful. For that reason, the typed NumPy API is often stricter than the runtime NumPy API. This section describes some notable differences.

ArrayLike#

The ArrayLike type tries to avoid creating object arrays. For example,

>>> np.array(x**2 for x in range(10))
array(<generator object <genexpr> at ...>, dtype=object)

is valid NumPy code which will create a 0-dimensional object array. Type checkers will, however, complain about the above example when using the NumPy types. If you really intended to do the above, then you can either use a # type: ignore comment:

>>> np.array(x**2 for x in range(10))  # type: ignore

or explicitly type the array like object as Any:

>>> from typing import Any
>>> array_like: Any = (x**2 for x in range(10))
>>> np.array(array_like)
array(<generator object <genexpr> at ...>, dtype=object)

ndarray#

It’s possible to mutate the dtype of an array at runtime. For example, the following code is valid:

>>> x = np.array([1, 2])
>>> x.dtype = np.bool

This sort of mutation is not allowed by the types. Users who want to write statically typed code should instead use the numpy.ndarray.view method to create a view of the array with a different dtype.
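
For example, a minimal sketch of the view-based alternative (unlike the dtype assignment above, this returns a new view of the same memory and leaves x itself untouched):

>>> x = np.array([1, 2])
>>> y = x.view(np.bool)  # new view of the same buffer; x keeps its dtype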

DTypeLike#

The DTypeLike type tries to avoid the creation of dtype objects from a dictionary of fields, like the one below:

>>> x = np.dtype({"field1": (float, 1), "field2": (int, 3)})

Although this is valid NumPy code, the type checker will complain about it, since its usage is discouraged. See Data type objects for further details.
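
A statically acceptable alternative is to spell the fields out explicitly, e.g. as a list of (name, dtype) tuples. The sketch below is illustrative only and drops the field offsets from the example above:

>>> x = np.dtype([("field1", np.float64), ("field2", np.int64)])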

Number precision#

The precision of numpy.number subclasses is treated as an invariant generic parameter (see NBitBase), simplifying the annotation of processes involving precision-based casting.

>>> from typing import TypeVar
>>> import numpy as np
>>> import numpy.typing as npt

>>> T = TypeVar("T", bound=npt.NBitBase)
>>> def func(a: "np.floating[T]", b: "np.floating[T]") -> "np.floating[T]":
...     ...

Consequently, the likes of float16, float32 and float64 are still subtypes of floating but, contrary to runtime, they are not necessarily considered subclasses of it.
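
For example, given the func definition above, a type checker will accept a call where both arguments share the same precision, but it will typically reject mixed precisions, since T is invariant and must resolve to a single precision (a hedged sketch):

>>> out = func(np.float64(1.0), np.float64(2.0))  # OK: T resolves to 64-bit
>>> # func(np.float32(1.0), np.float64(2.0))      # rejected by the type checker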

Timedelta64#

The timedelta64 class is not considered a subclass of signedinteger, the former inheriting only from generic during static type checking.
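
The difference is only visible during static type checking; at runtime the usual subclass relationship still holds:

>>> import numpy as np
>>> issubclass(np.timedelta64, np.signedinteger)
True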

0D arrays#

During runtime NumPy aggressively casts any passed 0D array into its corresponding generic instance. Until the introduction of shape typing (see PEP 646) it is unfortunately not possible to make the necessary distinction between 0D and >0D arrays. While thus not strictly correct, all operations that can potentially perform a 0D-array -> scalar cast are currently annotated as exclusively returning an ndarray.

If it is known in advance that an operation will perform a 0D-array -> scalar cast, then one can consider manually remedying the situation with either typing.cast or a # type: ignore comment.
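
For instance, ufuncs applied to 0D arrays return scalars at runtime, even though the annotations report an ndarray; a minimal sketch of the cast-based remedy:

>>> from typing import cast
>>> import numpy as np

>>> x = np.array(1.0) * np.array(2.0)  # runtime: np.float64; typed: ndarray
>>> scalar = cast(np.float64, x)       # manually assert the runtime type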

Record array dtypes#

The dtype of numpy.recarray, and the Creating record arrays functions in general, can be specified in one of two ways:

  • Directly via the dtype argument.

  • With up to five helper arguments that operate via numpy.rec.format_parser: formats, names, titles, aligned and byteorder.

These two approaches are currently typed as being mutually exclusive, i.e. if dtype is specified then one may not specify formats. While this mutual exclusivity is not (strictly) enforced during runtime, combining both dtype specifiers can lead to unexpected or even downright buggy behavior.
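
A hedged sketch of the two spellings, using np.rec.fromrecords with an illustrative field layout:

>>> import numpy as np

>>> # Either specify the dtype directly...
>>> a = np.rec.fromrecords([(1, 2.0)], dtype=[("x", np.int64), ("y", np.float64)])

>>> # ...or use the format_parser-style helper arguments, but not both:
>>> b = np.rec.fromrecords([(1, 2.0)], formats=["i8", "f8"], names=["x", "y"])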

API#

numpy.typing.ArrayLike = typing.Union[...]#

A Union representing objects that can be coerced into an ndarray.

Among others this includes the likes of:

  • Scalars.

  • (Nested) sequences.

  • Objects implementing the __array__ protocol.

New in version 1.20.

See Also

array_like:

Any scalar or sequence that can be interpreted as an ndarray.

Examples

>>> import numpy as np
>>> import numpy.typing as npt

>>> def as_array(a: npt.ArrayLike) -> np.ndarray:
...     return np.array(a)

numpy.typing.DTypeLike = typing.Union[...]#

A Union representing objects that can be coerced into a dtype.

Among others this includes the likes of:

  • type objects.

  • Character codes or the names of type objects.

  • Objects with the .dtype attribute.

New in version 1.20.

See Also

Specifying and constructing data types

A comprehensive overview of all objects that can be coerced into data types.

Examples

>>> import numpy as np
>>> import numpy.typing as npt

>>> def as_dtype(d: npt.DTypeLike) -> np.dtype:
...     return np.dtype(d)

numpy.typing.NDArray = numpy.ndarray[tuple[int, ...], numpy.dtype[+_ScalarType_co]]#

A np.ndarray[tuple[int, ...], np.dtype[+ScalarType]] type alias that is generic w.r.t. its dtype.type.

Can be used during runtime for typing arrays with a given dtype and unspecified shape.

New in version 1.21.

Examples

>>> import numpy as np
>>> import numpy.typing as npt

>>> print(npt.NDArray)
numpy.ndarray[tuple[int, ...], numpy.dtype[+_ScalarType_co]]

>>> print(npt.NDArray[np.float64])
numpy.ndarray[tuple[int, ...], numpy.dtype[numpy.float64]]

>>> NDArrayInt = npt.NDArray[np.int_]
>>> a: NDArrayInt = np.arange(10)

>>> from typing import Any

>>> def func(a: npt.ArrayLike) -> npt.NDArray[Any]:
...     return np.array(a)

class numpy.typing.NBitBase#

A type representing numpy.number precision during static type checking.

Used exclusively for the purpose of static type checking, NBitBase represents the base of a hierarchical set of subclasses. Each subsequent subclass herein represents a lower level of precision, e.g. 64Bit > 32Bit > 16Bit.

New in version 1.20.

Examples

Below is a typical usage example: NBitBase is herein used for annotating a function that takes a float and an integer of arbitrary precision as arguments and returns a new float of whichever precision is largest (e.g. np.float16 + np.int64 -> np.float64).

>>> from __future__ import annotations
>>> from typing import TypeVar, TYPE_CHECKING
>>> import numpy as np
>>> import numpy.typing as npt

>>> S = TypeVar("S", bound=npt.NBitBase)
>>> T = TypeVar("T", bound=npt.NBitBase)

>>> def add(a: np.floating[S], b: np.integer[T]) -> np.floating[S | T]:
...     return a + b

>>> a = np.float16()
>>> b = np.int64()
>>> out = add(a, b)

>>> if TYPE_CHECKING:
...     reveal_locals()
...     # note: Revealed local types are:
...     # note:     a: numpy.floating[numpy.typing._16Bit*]
...     # note:     b: numpy.signedinteger[numpy.typing._64Bit*]
...     # note:     out: numpy.floating[numpy.typing._64Bit*]