NumPy 1.25.0 Release Notes#
The NumPy 1.25.0 release continues the ongoing work to improve the handling and promotion of dtypes, increase the execution speed, and clarify the documentation. There has also been work to prepare for the future NumPy 2.0.0 release, resulting in a large number of new and expired deprecations. Highlights are:
- Support for MUSL, there are now MUSL wheels.
- Support for the Fujitsu C/C++ compiler.
- Object arrays are now supported in einsum.
- Support for inplace matrix multiplication (@=).
We will be releasing NumPy 1.26 when Python 3.12 comes out. That is needed because distutils has been dropped by Python 3.12 and we will be switching to using meson for future builds. The next mainline release will be NumPy 2.0.0. We plan that the 2.0 series will still support downstream projects built against earlier versions of NumPy.
The Python versions supported in this release are 3.9-3.11.
np.core.MachAr is deprecated. It is private API. The names defined in np.core should generally be considered private.
np.round_ is deprecated. Use np.round instead.
np.product is deprecated. Use np.prod instead.
np.cumproduct is deprecated. Use np.cumprod instead.
np.sometrue is deprecated. Use np.any instead.
np.alltrue is deprecated. Use np.all instead.
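For reference, a quick sketch of the replacements (the results are identical to those of the deprecated aliases):

```python
import numpy as np

x = np.array([1, 2, 3])
assert np.prod(x) == 6             # was np.product
assert np.cumprod(x)[-1] == 6      # was np.cumproduct
assert np.any(x > 2)               # was np.sometrue
assert np.all(x > 0)               # was np.alltrue
assert np.round(np.float64(2.5)) == 2.0  # was np.round_ (round-half-to-even)
```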
Only ndim-0 arrays are treated as scalars. NumPy used to treat all arrays of size 1 (e.g., np.array([3.14])) as scalars. In the future, this will be limited to arrays of ndim 0 (e.g., np.array(3.14)). The following expressions will report a deprecation warning:

a = np.array([3.14])
float(a)  # better: a[0] to get the numpy.float or a.item()

b = np.array([[3.14]])
c = numpy.random.rand(10)
c[0] = b  # better: c[0] = b[0, 0]
np.find_common_type is deprecated. numpy.find_common_type is now deprecated and its use should be replaced with either numpy.result_type or numpy.promote_types. Most users leave the second scalar_types argument empty, in which case np.result_type and np.promote_types are both faster and more robust. When not using scalar_types the main difference is that the replacement intentionally converts non-native byte-order to native byte order. Further, find_common_type returns object dtype rather than failing promotion. This leads to differences when the inputs are not all numeric. Importantly, this also happens for e.g. timedelta/datetime for which NumPy promotion rules are currently sometimes surprising.

When the scalar_types argument is not empty, things are more complicated. In most cases, using np.result_type and passing Python values such as 0, 0.0, or 0j has the same result as using int, float, or complex in scalar_types. In such cases np.result_type is the correct replacement and it may be passed scalar values like np.float32(0.0). Passing values other than 0 may lead to value-inspecting behavior (which np.find_common_type never used and NEP 50 may change in the future). The main possible change in behavior in this case is when the array types are signed integers and scalar types are unsigned.

If you are unsure about how to replace a use of scalar_types or when non-numeric dtypes are likely, please do not hesitate to open a NumPy issue to ask for help.
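As a sketch of the replacement in the common case where scalar_types is left empty:

```python
import numpy as np

# old: np.find_common_type([np.int64, np.float32], [])
# new: either of the following, both faster and more robust
assert np.promote_types(np.int64, np.float32) == np.float64
assert np.result_type(np.int64, np.float32) == np.float64
```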
np.core.machar and np.finfo.machar have been removed.
+arr will now raise an error when the dtype is not numeric (and positive is undefined).
A sequence must now be passed into the stacking family of functions (stack, vstack, hstack, dstack and column_stack).
np.clip now defaults to same-kind casting. Falling back to unsafe casting was deprecated in NumPy 1.17.
np.clip will now propagate np.nan values passed as min or max. Previously, a scalar NaN was usually ignored. This was deprecated in NumPy 1.17.
The np.dual submodule has been removed.
NumPy now always ignores sequence behavior for an array-like (defining one of the array protocols). (Deprecation started NumPy 1.20)
The FutureWarning when casting to a subarray dtype in astype or the array creation functions such as asarray is now finalized. The behavior is now always the same as if the subarray dtype was wrapped into a single field (which was the workaround, previously). (FutureWarning since NumPy 1.20)
The == and != warnings have been finalized. The == and != operators on arrays now always:

- raise errors that occur during comparisons, such as when the arrays have incompatible shapes (np.array([1, 2]) == np.array([1, 2, 3])).

- return an array of all True or all False when values are fundamentally not comparable (e.g. have different dtypes). An example is np.array(["a"]) == np.array([1]).

This mimics the Python behavior of returning False and True when comparing incompatible types like "a" == 1 and "a" != 1. For a long time these gave DeprecationWarning or FutureWarning.
Nose support has been removed. NumPy switched to using pytest in 2018 and nose has been unmaintained for many years. We have kept NumPy's nose support to avoid breaking downstream projects that might have been using it and had not yet switched to pytest or some other testing framework. With the arrival of Python 3.12, unpatched nose will raise an error. It is time to move on.
These nose-era decorators are not to be confused with the pytest versions with similar names, e.g., pytest.mark.slow, pytest.mark.skipif, pytest.mark.parametrize.
The numpy.testing.utils shim has been removed. Importing from the numpy.testing.utils shim has been deprecated since 2019; the shim has now been removed. All imports should be made directly from numpy.testing.
The environment variable to disable dispatching has been removed. Support for the NUMPY_EXPERIMENTAL_ARRAY_FUNCTION environment variable has been removed. This variable disabled dispatching with __array_function__.
y= as an alias of out= has been removed. The fix, isposinf, and isneginf functions allowed using y= as a (deprecated) alias for out=. This is no longer supported.
The busday_count method now correctly handles cases where the begindates is later in time than the enddates. Previously, the enddates was included, even though the documentation states it is always excluded.
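A sketch of the corrected behavior (the dates are arbitrary; both endpoints are Mondays):

```python
import numpy as np

# forward: business days in [2023-01-02, 2023-01-09), i.e. Mon-Fri of one week
assert np.busday_count('2023-01-02', '2023-01-09') == 5
# reversed arguments: enddates is now correctly excluded, giving the negated count
assert np.busday_count('2023-01-09', '2023-01-02') == -5
```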
When comparing datetimes and timedelta using np.not_equal, numpy previously allowed the comparison with casting="unsafe". This operation now fails. Forcing the output dtype using the dtype kwarg can make the operation succeed, but we do not recommend it.
When loading data from a file handle using np.load, if the handle is at the end of file, as can happen when reading multiple arrays by calling np.load repeatedly, numpy previously raised ValueError if allow_pickle=False, and OSError if allow_pickle=True. Now it raises EOFError instead, in both cases.
mode=wrap pads with strict multiples of original data#
Code based on earlier versions of pad that uses mode="wrap" will return different results when the padding size is larger than the initial array. mode=wrap now always fills the space with strict multiples of the original data, even if the padding size is larger than the initial array.
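For example, wrap padding now tiles the original data periodically even when the pad width exceeds the array length:

```python
import numpy as np

a = np.arange(3)                       # [0, 1, 2]
padded = np.pad(a, (0, 7), mode="wrap")
# the padded region repeats strict multiples of the original data
assert list(padded) == [0, 1, 2, 0, 1, 2, 0, 1, 2, 0]
```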
Cython long_t and ulong_t removed#
long_t and ulong_t were aliases for longlong_t and ulonglong_t and confusing (a remainder from Python 2). This change may lead to the errors:

'long_t' is not a type identifier
'ulong_t' is not a type identifier

We recommend use of bit-sized types such as cnp.int64_t or the use of cnp.intp_t which is 32 bits on 32 bit systems and 64 bits on 64 bit systems (this is most compatible with indexing). If C long is desired, use plain long or npy_long. cnp.int_t is also long (NumPy's default integer). However, long is 32 bit on 64 bit windows and we may wish to adjust this even in NumPy. (Please do not hesitate to contact NumPy developers if you are curious about this.)
Changed error message and type for bad axes argument to ufuncs#
The error message and type when a wrong axes value is passed to ufunc(..., axes=[...]) has changed. The message is now more indicative of the problem, and if the value is mismatched an AxisError will be raised. A TypeError will still be raised for invalid input types.
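For instance, an axes argument of the wrong type still raises TypeError (AxisError is reserved for mismatched values):

```python
import numpy as np

a = np.ones((2, 2))
try:
    np.matmul(a, a, axes="not a sequence")   # invalid type for axes
    raised = None
except TypeError:
    raised = TypeError
assert raised is TypeError
```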
Array-likes that define __array_ufunc__ can now override ufuncs if used as where#
If the where keyword argument of a numpy.ufunc is a subclass of numpy.ndarray or is a duck type that defines numpy.class.__array_ufunc__ it can override the behavior of the ufunc using the same mechanism as the input and output arguments. Note that for this to work properly, the where.__array_ufunc__ implementation will have to unwrap the where argument to pass it into the default implementation of the ufunc or, for numpy.ndarray subclasses, before using super().__array_ufunc__.
Compiling against the NumPy C API is now backwards compatible by default#
NumPy now defaults to exposing a backwards compatible subset of the C-API.
This makes the use of oldest-supported-numpy unnecessary. Libraries can override the default minimal version to be compatible with by using:

#define NPY_TARGET_VERSION NPY_1_22_API_VERSION

before including NumPy or by passing the equivalent -D option to the compiler. The NumPy 1.25 default is NPY_1_19_API_VERSION. Because the NumPy 1.19 C API was identical to the NumPy 1.16 one, resulting programs will be compatible with NumPy 1.16 (from a C-API perspective). This default will be increased in future non-bugfix releases. You can still compile against an older NumPy version and run on a newer one. For more details please see "For downstream package authors".
np.einsum now accepts arrays with object dtype#
The code path will call python operators on object dtype arrays, much like np.dot and np.matmul.
Add support for inplace matrix multiplication#
It is now possible to perform inplace matrix multiplication via the @= operator.
>>> import numpy as np
>>> a = np.arange(6).reshape(3, 2)
>>> print(a)
[[0 1]
 [2 3]
 [4 5]]
>>> b = np.ones((2, 2), dtype=int)
>>> a @= b
>>> print(a)
[[1 1]
 [5 5]
 [9 9]]
NPY_ENABLE_CPU_FEATURES environment variable#
Users may now choose to enable only a subset of the built CPU features at runtime by specifying the NPY_ENABLE_CPU_FEATURES environment variable. Note that these specified features must be outside the baseline, since those are always assumed. Errors will be raised if attempting to enable a feature that is either not supported by your CPU, or that NumPy was not built with.
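For example (assuming a build in which AVX2 is an optional, non-baseline feature):

```shell
# enable only AVX2 at runtime; all other optional CPU features stay disabled
NPY_ENABLE_CPU_FEATURES="AVX2" python -c "import numpy; print(numpy.__version__)"
```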
NumPy now has an np.exceptions namespace#
NumPy now has a dedicated namespace making most exceptions and warnings available. All of these remain available in the main namespace, although some may be moved slowly in the future. The main reason for this is to increase discoverability and to allow adding future exceptions.
np.linalg functions return NamedTuples#
np.linalg functions that return tuples now return namedtuples. These functions are eig(), eigh(), qr(), slogdet() and svd(). The return type is unchanged in instances where these functions return non-tuples with certain keyword arguments (like np.linalg.svd(compute_uv=False)).
String functions in
np.char are compatible with NEP 42 custom dtypes#
Custom dtypes that represent unicode strings or byte strings can now be passed to the string functions in np.char.
String dtype instances can be created from the string abstract dtype classes#
It is now possible to create a string dtype instance with a size without using the string name of the dtype. For example, type(np.dtype(np.str_))(8) will create a dtype that is equivalent to np.dtype('U8'). This feature is most useful when writing generic code dealing with string dtype classes.
Fujitsu C/C++ compiler is now supported#
Support for the Fujitsu compiler has been added. To build with the Fujitsu compiler, run:
python setup.py build -c fujitsu
SSL2 is now supported#
Support for SSL2 has been added. SSL2 is a library that provides OpenBLAS-compatible GEMM functions. To enable SSL2, you need to edit site.cfg and build with the Fujitsu compiler. See site.cfg.example.
NDArrayOperatorsMixin specifies that it has no __slots__#
The NDArrayOperatorsMixin class now specifies that it contains no __slots__, ensuring that subclasses can now make use of this feature in Python.
Fix power of complex zero#
np.power now returns a different result for 0^{non-zero} for complex numbers. Note that the value is only defined when the real part of the exponent is larger than zero. Previously, NaN was returned unless the imaginary part was strictly zero. The return value is either 0+0j or 0-0j.
NumPy now has a new DTypePromotionError which is used when two dtypes cannot be promoted to a common one. For example, np.result_type("M8[s]", np.complex128) raises this new exception.
np.show_config uses information from Meson#
Build and system information now contains information from Meson. np.show_config now has a new optional parameter mode to help customize the output.
Fixed np.ma.diff not preserving the mask when called with arguments prepend/append#
np.ma.diff with arguments prepend and/or append now returns a MaskedArray with the input mask preserved. Previously, a MaskedArray without the mask was returned.
Corrected error handling for NumPy C-API in Cython#
Many NumPy C functions defined for use in Cython were lacking the correct error indicator like except -1 or except *. These have now been added.
Ability to directly spawn random number generators#
numpy.random.Generator.spawn now allows to directly spawn new independent child generators via the numpy.random.SeedSequence.spawn mechanism. numpy.random.BitGenerator.spawn does the same for the underlying bit generator. Additionally, numpy.random.BitGenerator.seed_seq now gives direct access to the seed sequence used for initializing the bit generator. This allows for example:
seed = 0x2e09b90939db40c400f8f22dae617151
rng = np.random.default_rng(seed)
child_rng1, child_rng2 = rng.spawn(2)
# safely use rng, child_rng1, and child_rng2
Previously, this was hard to do without passing the SeedSequence explicitly. Please see numpy.random.SeedSequence for more information.
numpy.logspace now supports a non-scalar base argument#
The base argument of numpy.logspace can now be array-like if it is broadcastable against the start and stop arguments.
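A sketch of the broadcasting behavior, one row per base:

```python
import numpy as np

# base has shape (2, 1) and broadcasts against the sampled exponents [1, 2]
out = np.logspace(1, 2, num=2, base=np.array([[2.0], [10.0]]))
assert out.shape == (2, 2)
assert np.allclose(out, [[2.0, 4.0], [10.0, 100.0]])
```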
np.ma.dot() now supports non-2d arrays#
Previously np.ma.dot() only worked if a and b were both 2d. Now it works for non-2d arrays as well as np.dot().
Explicitly show keys of .npz file in repr#
NpzFile now shows the keys of the loaded .npz file when printed.
>>> npzfile = np.load('arr.npz')
>>> npzfile
NpzFile 'arr.npz' with keys arr_0, arr_1, arr_2, arr_3, arr_4...
NumPy now exposes DType classes in np.dtypes#
The numpy.dtypes module now exposes DType classes and will contain future dtype-related functionality.
Most users should have no need to use these classes directly.
Drop dtype metadata before saving in .npy or .npz files#
A *.npy file containing a table with a dtype with metadata cannot be read back. Now, np.save and np.savez drop metadata before saving.
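A sketch of the new round-trip behavior (saving to an in-memory buffer for illustration):

```python
import io
import numpy as np

dt = np.dtype(np.float64, metadata={"unit": "m"})
buf = io.BytesIO()
np.save(buf, np.zeros(3, dtype=dt))   # metadata is dropped on save
buf.seek(0)
loaded = np.load(buf)
assert loaded.dtype.metadata is None  # so the file can always be read back
```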
numpy.lib.recfunctions.structured_to_unstructured returns views in more cases#
structured_to_unstructured now returns a view, if the stride between the fields is constant. Previously, padding between the fields or a reversed field would lead to a copy.
This change only applies to ndarray, memmap and recarray. For all other array subclasses, the behavior remains unchanged.
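A sketch of the view behavior for a layout with constant stride between fields:

```python
import numpy as np
from numpy.lib.recfunctions import structured_to_unstructured

arr = np.array([(1.0, 2.0), (3.0, 4.0)], dtype=[('x', 'f8'), ('y', 'f8')])
u = structured_to_unstructured(arr)
u[0, 0] = 99.0          # writes through to the structured array: it is a view
assert arr['x'][0] == 99.0
```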
Signed and unsigned integers always compare correctly#
When uint64 and int64 are mixed in NumPy, NumPy typically promotes both to float64. This behavior may be argued about but is confusing for comparisons such as ==, <=, since the results returned can be incorrect but the conversion is hidden since the result is a boolean. NumPy will now return the correct results for these by avoiding the cast to float.
Performance improvements and changes#
np.argsort on AVX-512 enabled processors#
The 32-bit and 64-bit quicksort algorithms for np.argsort gain up to a 6x speedup on processors that support the AVX-512 instruction set.
Thanks to Intel corporation for sponsoring this work.
np.sort on AVX-512 enabled processors#
Quicksort for 16-bit and 64-bit dtypes gains up to 15x and 9x speedups respectively on processors that support the AVX-512 instruction set.
Thanks to Intel corporation for sponsoring this work.
__array_function__ machinery is now much faster#
The overhead of the majority of functions in NumPy is now smaller especially when keyword arguments are used. This change significantly speeds up many simple function calls.
ufunc.at can be much faster#
ufunc.at can be up to 9x faster. The conditions for this speedup: operands are aligned and no casting is needed. If ufuncs with appropriate indexed loops are used on 1d arguments meeting the above conditions, ufunc.at can be up to 60x faster (an additional 7x speedup). Appropriate indexed loops have been added to add, subtract, multiply, floor_divide, maximum, minimum, fmax, and fmin. The internal logic is similar to the logic used for regular ufuncs, which also have fast paths.
Thanks to the D. E. Shaw group for sponsoring this work.
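A quick reminder of what ufunc.at does; unlike buffered fancy-index assignment, repeated indices accumulate:

```python
import numpy as np

a = np.zeros(5)
idx = np.array([0, 1, 1, 3])
np.add.at(a, idx, 1.0)     # unbuffered: the repeated index 1 accumulates twice
assert list(a) == [1.0, 2.0, 0.0, 1.0, 0.0]
```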
Faster membership test on NpzFile#
Membership test on NpzFile will no longer decompress the archive if it is successful.
np.r_ and np.c_ with certain scalar values#
In rare cases, using mainly np.r_ with scalars can lead to different results. The main potential changes are highlighted by the following:

>>> np.r_[np.arange(5, dtype=np.uint8), -1].dtype
dtype('int16')  # rather than the default integer (int64 or int32)
>>> np.r_[np.arange(5, dtype=np.int8), 255]
array([  0,   1,   2,   3,   4, 255], dtype=int16)

Where the second example previously returned:

array([ 0, 1, 2, 3, 4, -1], dtype=int8)

The first one is due to a signed integer scalar with an unsigned integer array, while the second is due to 255 not fitting into int8 and NumPy currently inspecting values to make this work.
(Note that the second example is expected to change in the future due to
NEP 50; it will then raise an error.)
Most NumPy functions are wrapped into a C-callable#
To speed up the __array_function__ dispatching, most NumPy functions are now wrapped into C-callables and are not proper Python functions or C methods.
They still look and feel the same as before (like a Python function), and this
should only improve performance and user experience (cleaner tracebacks).
However, please inform the NumPy developers if this change confuses your
program for some reason.
C++ standard library usage#
NumPy builds now depend on the C++ standard library, because the numpy.core._multiarray_umath extension is linked with the C++ linker.