NumPy 1.20.0 Release Notes

New functions

numpy.broadcast_shapes is a new user-facing function

broadcast_shapes gets the resulting shape from broadcasting the given shape tuples against each other.

>>> np.broadcast_shapes((1, 2), (3, 1))
(3, 2)

>>> np.broadcast_shapes(2, (3, 1))
(3, 2)

>>> np.broadcast_shapes((6, 7), (5, 6, 1), (7,), (5, 1, 7))
(5, 6, 7)



Using the aliases of builtin types like np.int is deprecated

For a long time, np.int has been an alias of the builtin int. This is repeatedly a cause of confusion for newcomers, and is also simply not useful.

These aliases have been deprecated. The table below shows the full list of deprecated aliases, along with their exact meaning. Replacing uses of items in the first column with the contents of the second column will work identically and silence the deprecation warning.

In many cases, it may have been intended to use the types from the third column. Be aware that use of these types may result in subtle but desirable behavior changes.

Deprecated name    Identical to                Possibly intended numpy type

numpy.bool         bool                        numpy.bool_
numpy.int          int                         numpy.int_ (default int dtype), numpy.cint (C int)
numpy.float        float                       numpy.float_, numpy.double (equivalent)
numpy.complex      complex                     numpy.complex_, numpy.cdouble (equivalent)
numpy.object       object                      numpy.object_
numpy.str          str                         numpy.str_
numpy.long         int (long on Python 2)      numpy.int_ (C long), numpy.longlong (largest integer type)
numpy.unicode      str (unicode on Python 2)   numpy.str_


Note that for technical reasons these deprecation warnings will only be emitted on Python 3.7 and above.
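As a minimal sketch of the substitution, using np.float as the example alias:

```python
import numpy as np

# Deprecated alias (would emit a DeprecationWarning on NumPy 1.20+,
# and is removed entirely in NumPy 2.0):
#   np.array([1.0], dtype=np.float)

# Drop-in replacement using the builtin type (identical behaviour):
a = np.array([1.0], dtype=float)

# Possibly intended NumPy-specific type (explicit precision):
b = np.array([1.0], dtype=np.float64)
```

Both spellings produce the same float64 array; the second makes the precision explicit.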


Passing shape=None to functions with a non-optional shape argument is deprecated

Previously, this was an alias for passing shape=(). This deprecation is emitted by PyArray_IntpConverter in the C API. If your API is intended to support passing None, then you should check for None prior to invoking the converter, so as to be able to distinguish None and ().


Indexing errors will be reported even when index result is empty

In the future, NumPy will raise an IndexError when an integer array index contains out of bound values even if a non-indexed dimension is of length 0. This will now emit a DeprecationWarning. This can happen when the array is previously empty, or an empty slice is involved:

arr1 = np.zeros((5, 0))
arr1[[20]]
arr2 = np.zeros((5, 5))
arr2[[20], :0]

Previously the non-empty index [20] was not checked for correctness. It is now checked, emitting a DeprecationWarning that will eventually become an error. This also applies to assignments.
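A sketch of surfacing the new diagnostic: depending on the NumPy version, the out-of-bounds index raises either a DeprecationWarning (when warnings are escalated to errors) or, once the deprecation expires, an IndexError.

```python
import warnings
import numpy as np

arr2 = np.zeros((5, 5))
caught = False
with warnings.catch_warnings():
    warnings.simplefilter("error", DeprecationWarning)
    try:
        # index 20 is out of bounds for axis 0, even though the result is empty
        arr2[[20], :0]
    except (DeprecationWarning, IndexError):
        caught = True
```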


Inexact matches for mode and searchside are deprecated

Inexact and case-insensitive matches for mode and searchside were previously accepted as valid inputs; they now give a DeprecationWarning. For example, the following usages are now deprecated:

import numpy as np

arr = np.array([[3, 6, 6], [4, 5, 1]])

# mode: inexact match
np.ravel_multi_index(arr, (7, 6), mode="clap")  # should be "clip"

# searchside: inexact match
np.searchsorted(arr[0], 4, side="random")  # should be "right"
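For comparison, the exact spellings work without any warning:

```python
import numpy as np

arr = np.array([[3, 6, 6], [4, 5, 1]])

# exact match for mode: no warning
flat = np.ravel_multi_index(arr, (7, 6), mode="clip")

# exact match for side: no warning
pos = np.searchsorted(arr[0], 4, side="right")
```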


Deprecation of numpy.dual

The module numpy.dual is deprecated. Instead of importing functions from numpy.dual, the functions should be imported directly from NumPy or SciPy.


outer and ufunc.outer deprecated for matrix

Use of np.matrix with outer or generic ufunc outer calls such as numpy.add.outer is deprecated. Previously, matrix was converted to an array here. This will not be done in the future, requiring a manual conversion to arrays.


Further Numeric Style types Deprecated

The remaining numeric-style type codes Bytes0, Str0, Uint32, Uint64, and Datetime64 have been deprecated. The lower-case variants should be used instead. For bytes and strings, "S" and "U" are further alternatives.


The ndincr method of ndindex is deprecated

The documentation has warned against using this function since NumPy 1.8. Use next(it) instead of it.ndincr().
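A minimal sketch of the replacement:

```python
import numpy as np

it = np.ndindex(2, 2)
first = next(it)   # preferred over the deprecated it.ndincr()
```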


Arrays cannot be created using subarray dtypes

Array creation and casting using np.array(obj, dtype) and arr.astype(dtype) will no longer support dtype being a subarray dtype such as np.dtype("(2)i,").

For such a dtype the following behaviour occurs currently:

res = np.array(obj, dtype)

res.dtype is not dtype
res.dtype is dtype.base
res.shape[-dtype.ndim:] == dtype.shape

The shape of the dtype is included into the array. This leads to inconsistencies when obj is:

  • a scalar, such as np.array(1, dtype="(2)i")

  • an array, such as np.array(np.array([1]), dtype="(2)i")

In most cases the work-around is to pass the output dtype directly and possibly check res.shape[-dtype.ndim:] == dtype.shape. If this is insufficient, please open an issue on the NumPy issue tracker.
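A sketch of the work-around described above: create the array with the subarray dtype directly and verify how the shape is absorbed.

```python
import numpy as np

dt = np.dtype("(2,)i")          # a subarray dtype: pairs of C ints
res = np.zeros(3, dtype=dt)     # pass the dtype at creation time

# the subarray shape is absorbed into the array shape:
assert res.shape[-dt.ndim:] == dt.shape
# the stored dtype is the base dtype, not the subarray dtype:
assert res.dtype == dt.base
```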


Expired deprecations

  • The deprecation of numeric style type-codes np.dtype("Complex64") (with upper case spelling) is expired. "Complex64" corresponded to "complex128" and "Complex32" corresponded to "complex64".

  • The deprecation of np.sctypeNA and np.typeNA is expired. Both have been removed from the public API. Use np.typeDict instead.


  • The 14-year deprecation of np.ctypeslib.ctypes_load_library is expired. Use load_library instead, which is identical.


Financial functions removed

In accordance with NEP 32, the financial functions are removed from NumPy 1.20. The functions that have been removed are fv, ipmt, irr, mirr, nper, npv, pmt, ppmt, pv, and rate. These functions are available in the numpy_financial library.


Compatibility notes

Same kind casting in concatenate with axis=None

When concatenate is called with axis=None, the flattened arrays were previously cast with "unsafe" casting, while any other axis choice uses "same kind". This differing default has been deprecated and "same kind" casting will be used instead. The new casting keyword argument can be used to retain the old behaviour.
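A sketch of retaining the old behaviour via the new keyword, using a target dtype that requires an unsafe float-to-integer cast:

```python
import numpy as np

a = np.array([1.7, 2.2])
b = np.array([3, 4])

# Request an integer result; float -> int is an unsafe cast,
# so it must be explicitly allowed with casting="unsafe".
res = np.concatenate((a, b), axis=None, dtype=np.int64, casting="unsafe")
```

The float values are truncated toward zero, as unsafe casting always did.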


NumPy Scalars are cast when assigned to arrays

When creating or assigning to arrays, in all relevant cases NumPy scalars will now be cast identically to NumPy arrays. In particular this changes the behaviour in some cases which previously raised an error:

np.array([np.float64(np.nan)], dtype=np.int64)

will succeed and return an undefined result (usually the smallest possible integer). This also affects assignments:

arr[0] = np.float64(np.nan)

At this time, NumPy retains the behaviour for:

np.array(np.float64(np.nan), dtype=np.int64)

The above changes do not affect Python scalars:

np.array([float("NaN")], dtype=np.int64)

remains unaffected (np.nan is a Python float, not a NumPy one). Unlike signed integers, unsigned integers do not retain this special case, since they always behaved more like casting. The following code stops raising an error:

np.array([np.float64(np.nan)], dtype=np.uint64)

To avoid backward compatibility issues, at this time assignment from datetime64 scalar to strings of too short length remains supported. This means that np.asarray(np.datetime64("2020-10-10"), dtype="S5") succeeds now, when it failed before. In the long term this may be deprecated or the unsafe cast may be allowed generally to make assignment of arrays and scalars behave consistently.

Array coercion changes when Strings and other types are mixed

When strings and other types are mixed, such as:

np.array(["string", np.float64(3.)], dtype="S")

The results will change, which may lead to string dtypes with longer strings in some cases. In particular, if dtype="S" is not provided, any numerical value will lead to a string result long enough to hold all possible numerical values (e.g. "S32" for floats). Note that you should always provide dtype="S" when converting non-strings to strings.

If dtype="S" is provided the results will be largely identical to before, but NumPy scalars (not a Python float like 1.0), will still enforce a uniform string length:

np.array([np.float64(3.)], dtype="S")  # gives "S32"
np.array([3.0], dtype="S")  # gives "S3"

while previously the first version gave the same result as the second.

Array coercion restructure

Array coercion has been restructured. In general, this should not affect users. In extremely rare corner cases where array-likes are nested:

np.array([array_like1])

things will now be more consistent with:

np.array([np.array(array_like1)])
which could potentially change output subtly for badly defined array-likes. We are not aware of any such case where the results were not clearly incorrect previously.


Writing to the result of numpy.broadcast_arrays will export readonly buffers

In NumPy 1.17 numpy.broadcast_arrays started warning when the resulting array was written to. This warning was skipped when the array was used through the buffer interface (e.g. memoryview(arr)). The same now applies to the two protocols __array_interface__ and __array_struct__: they return read-only buffers instead of giving a warning.
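A minimal check of the new behaviour (assuming NumPy 1.20 or later), confirming that the exported buffer is read-only:

```python
import numpy as np

# broadcast_arrays returns views whose buffers are exported read-only
a, b = np.broadcast_arrays(np.arange(3), np.ones((2, 3)))
mv = memoryview(a)
```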


Numeric-style type names have been removed from type dictionaries

To stay in sync with the deprecation of np.dtype("Complex64") and other numeric-style (capital case) types, these were removed from np.sctypeDict and np.typeDict. You should use the lower-case versions instead. Note that "Complex64" corresponds to "complex128" and "Complex32" corresponds to "complex64". The new NumPy-style names denote the full size, not the size of the real/imaginary part.


nickname attribute removed from ABCPolyBase

An abstract property nickname has been removed from ABCPolyBase as it was no longer used in the derived convenience classes. This may affect users who have derived classes from ABCPolyBase and overridden the methods for representation and display, e.g. __str__, __repr__, _repr_latex, etc.


float->timedelta and uint64->timedelta promotion will raise a TypeError

Float and timedelta promotion consistently raises a TypeError. np.promote_types("float32", "m8") aligns with np.promote_types("m8", "float32") now and both raise a TypeError. Previously, np.promote_types("float32", "m8") returned "m8" which was considered a bug.

Uint64 and timedelta promotion consistently raises a TypeError. np.promote_types("uint64", "m8") aligns with np.promote_types("m8", "uint64") now and both raise a TypeError. Previously, np.promote_types("uint64", "m8") returned "m8" which was considered a bug.


numpy.genfromtxt now correctly unpacks structured arrays

Previously, numpy.genfromtxt failed to unpack if it was called with unpack=True and a structured datatype was passed to the dtype argument (or dtype=None was passed and a structured datatype was inferred). For example:

>>> data = StringIO("21 58.0\n35 72.0")
>>> np.genfromtxt(data, dtype=None, unpack=True)
array([(21, 58.), (35, 72.)], dtype=[('f0', '<i8'), ('f1', '<f8')])

Structured arrays will now correctly unpack into a list of arrays, one for each column:

>>> np.genfromtxt(data, dtype=None, unpack=True)
[array([21, 35]), array([58., 72.])]


mgrid, r_, etc. fixed to consistently return correct outputs for non-default precision inputs

Previously, np.mgrid[np.float32(0.1):np.float32(0.35):np.float32(0.1),] and np.r_[0:10:np.complex64(3j)] failed to return meaningful output. This bug potentially affected mgrid, ogrid, r_, and c_ whenever an input with a dtype other than the default float64 or complex128 (or the equivalent Python types) was used. The methods have been fixed to handle varying precision correctly.


Boolean array indices with mismatching shapes now properly give IndexError

Previously, if a boolean array index matched the size of the indexed array but not the shape, it was incorrectly allowed in some cases. In other cases, it gave an error, but the error was incorrectly a ValueError with a message about broadcasting instead of the correct IndexError.

For example, the following used to incorrectly give ValueError: operands could not be broadcast together with shapes (2,2) (1,4):

np.empty((2, 2))[np.array([[True, False, False, False]])]

And the following used to incorrectly return array([], dtype=float64):

np.empty((2, 2))[np.array([[False, False, False, False]])]

Both now correctly give IndexError: boolean index did not match indexed array along dimension 0; dimension is 2 but corresponding boolean dimension is 1.


Casting errors interrupt Iteration

When iterating while casting values, an error may stop the iteration earlier than before. In any case, a failed casting operation always returned undefined, partial results. Those may now be even more undefined and partial. For users of the NpyIter C-API such cast errors will now cause the iternext() function to return 0 and thus abort iteration. Currently, there is no API to detect such an error directly. It is necessary to check PyErr_Occurred(), which may be problematic in combination with NpyIter_Reset. These issues always existed, but new API could be added if required by users.


f2py generated code may return unicode instead of byte strings

Some byte strings previously returned by f2py generated code may now be unicode strings. This results from the ongoing Python2 -> Python3 cleanup.


The first element of the __array_interface__["data"] tuple must be an integer

This has been the documented interface for many years, but there was still code that would accept a byte string representation of the pointer address. That code has been removed; passing the address as a byte string will now raise an error.


The numpy.i file for swig is Python 3 only.

Uses of Python 2.7 C-API functions have been updated to Python 3 only. Users who need the old version should take it from an older version of NumPy.


New Features

norm=backward, forward keyword options for numpy.fft functions

The keyword argument option norm=backward is added as an alias for None and acts as the default option; using it has the direct transforms unscaled and the inverse transforms scaled by 1/n.

Using the new keyword argument option norm=forward has the direct transforms scaled by 1/n and the inverse transforms unscaled (i.e. exactly opposite to the default option norm=backward).
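A small sketch of the scaling relationship between the two options:

```python
import numpy as np

x = np.ones(4)
f_backward = np.fft.fft(x, norm="backward")   # same as the default norm=None
f_forward = np.fft.fft(x, norm="forward")     # forward transform scaled by 1/n

# the forward-normalized transform is the backward one divided by n
assert np.allclose(f_forward, f_backward / 4)
```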


NumPy is now typed

Type annotations have been added for large parts of NumPy. There is also a new numpy.typing module that contains useful types for end-users. The currently available types are

  • ArrayLike: for objects that can be coerced to an array

  • DTypeLike: for objects that can be coerced to a dtype


numpy.typing is accessible at runtime

The types in numpy.typing can now be imported at runtime. Code like the following will now work:

from numpy.typing import ArrayLike
x: ArrayLike = [1, 2, 3, 4]


Negation of user-defined BLAS/LAPACK detection order

distutils allows negation of libraries when determining BLAS/LAPACK libraries. This may be used to remove an item from the library resolution phase, i.e. to disallow NetLIB libraries one could do:

NPY_BLAS_ORDER='^blas' NPY_LAPACK_ORDER='^lapack' python setup.py build

which will use any of the accelerated libraries instead.


dtype option for cov and corrcoef

The dtype option is now available for numpy.cov and numpy.corrcoef. It specifies which data-type the returned result should have. By default the functions still return a numpy.float64 result.
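A minimal sketch of the new keyword:

```python
import numpy as np

x = np.array([[1, 2, 3], [4, 5, 6]])
c = np.cov(x, dtype=np.float32)        # result computed and returned as float32
r = np.corrcoef(x, dtype=np.float32)
```

Without the dtype argument, both functions still return float64 results as before.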



Enable multi-platform SIMD compiler optimizations

A series of improvements to NumPy's infrastructure pave the way to NEP-38, and can be summarized as follows:

  • New build arguments:

    • --cpu-baseline specifies the minimal set of required optimizations. The default value is min, which provides the minimum CPU features that can safely run on a wide range of platforms.

    • --cpu-dispatch specifies the dispatched set of additional optimizations. The default value is max -xop -fma4, which enables all CPU features except for AMD legacy features.

    • --disable-optimization explicitly disables all of the new improvements. It also adds a new C #definition, NPY_DISABLE_OPTIMIZATION, which can be used as a guard for any SIMD code.

  • Advanced CPU dispatcher: a flexible cross-architecture CPU dispatcher built on top of Python/NumPy distutils that supports all common compilers with a wide range of CPU features.

    The new dispatcher requires a special file extension *.dispatch.c to mark dispatch-able C sources. These sources can be compiled multiple times, with each compilation pass representing certain CPU features and providing different #definitions and flags that affect the code paths.

  • New auto-generated C header core/src/common/_cpu_dispatch.h. This header is generated by the distutils module ccompiler_opt and contains all the #definitions and headers for the instruction sets configured through the command arguments --cpu-baseline and --cpu-dispatch.

  • New C header core/src/common/npy_cpu_dispatch.h. This header contains all the utilities required for the whole CPU dispatching process, and can also be considered a bridge linking the new infrastructure to NumPy's CPU runtime detection.

  • New attributes on the NumPy umath module (Python level):

    • __cpu_baseline__, a list containing the minimal set of required optimizations supported by the compiler and platform, according to the values specified for --cpu-baseline.

    • __cpu_dispatch__, a list containing the dispatched set of additional optimizations supported by the compiler and platform, according to the values specified for --cpu-dispatch.

  • The supported CPU features are printed during the run of PytestTester.


Improved string representation for polynomials (__str__)

The string representation (__str__) of all six polynomial types in numpy.polynomial has been updated to give the polynomial as a mathematical expression instead of an array of coefficients. Two package-wide formats for the polynomial expressions are available - one using Unicode characters for superscripts and subscripts, and another using only ASCII characters.
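A small sketch of the new representation. The exact rendering depends on the configured style; here the package-wide ASCII format is selected explicitly via numpy.polynomial.set_default_printstyle:

```python
import numpy as np

np.polynomial.set_default_printstyle("ascii")   # ASCII-only expressions
p = np.polynomial.Polynomial([1, 2, 3])
s = str(p)   # a mathematical expression rather than a coefficient array
```

Passing "unicode" instead selects the format with superscript characters.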


Remove the Accelerate library as a candidate LAPACK library

Apple no longer supports Accelerate. Remove it.


Object arrays containing multi-line objects have a more readable repr

If elements of an object array have a repr containing new lines, then the wrapped lines will be aligned by column. Notably, this improves the repr of nested arrays:

>>> np.array([np.eye(2), np.eye(3)], dtype=object)
array([array([[1., 0.],
              [0., 1.]]),
       array([[1., 0., 0.],
              [0., 1., 0.],
              [0., 0., 1.]])], dtype=object)


Concatenate supports providing an output dtype

Support was added to concatenate to provide an output dtype and casting using keyword arguments. The dtype argument cannot be provided in conjunction with the out one.
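A minimal sketch of the new keyword:

```python
import numpy as np

# request the output dtype directly instead of casting afterwards
out = np.concatenate((np.array([1, 2]), np.array([3, 4])), dtype=np.float32)
```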


Thread-safe f2py callback functions

Callback functions in f2py are now threadsafe.


numpy.core.records.fromfile now supports file-like objects

numpy.rec.fromfile can now use file-like objects, for instance io.BytesIO
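A sketch of reading a record array from an in-memory file-like object:

```python
import io
import numpy as np

dt = [("a", "<i4"), ("b", "<f8")]
payload = np.array([(1, 2.0), (3, 4.0)], dtype=dt).tobytes()

# io.BytesIO now works where a real file was previously required
rec = np.rec.fromfile(io.BytesIO(payload), dtype=dt)
```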


Use f90 compiler specified in command line args

The compiler command selection for the Fortran Portland Group Compiler is changed in numpy.distutils.fcompiler. This only affects the linking command. This forces the use of the executable provided by the command line option (if provided) instead of the pgfortran executable. If no executable is provided to the command line option it defaults to the pgf90 executable, which is an alias for pgfortran according to the PGI documentation.


Add NumPy declarations for Cython 3.0 and later

The pxd declarations for Cython 3.0 were improved to avoid using deprecated NumPy C-API features. Extension modules built with Cython 3.0+ that use NumPy can now set the C macro NPY_NO_DEPRECATED_API=NPY_1_7_API_VERSION to avoid C compiler warnings about deprecated API usage.



np.linspace on integers now uses floor

When using an int dtype in numpy.linspace, float values were previously rounded towards zero. Now numpy.floor is used instead, which rounds toward -inf. This changes the results for negative values. For example, the following would previously give:

>>> np.linspace(-3, 1, 8, dtype=int)
array([-3, -2, -1, -1,  0,  0,  0,  1])

and now results in:

>>> np.linspace(-3, 1, 8, dtype=int)
array([-3, -3, -2, -2, -1, -1,  0,  1])

The former result can still be obtained with:

>>> np.linspace(-3, 1, 8).astype(int)
array([-3, -2, -1, -1,  0,  0,  0,  1])

