NumPy 1.24 Release Notes#
The NumPy 1.24.0 release continues the ongoing work to improve the handling and promotion of dtypes, increase the execution speed, and clarify the documentation. There are also a large number of new and expired deprecations due to changes in promotion and cleanups. This might be called a deprecation release. Highlights are:
Many new deprecations, check them out.
Many expired deprecations.
New F2PY features and fixes.
New “dtype” and “casting” keywords for stacking functions.
See below for the details.
This release supports Python versions 3.8-3.11.
Deprecations#
Deprecate fastCopyAndTranspose and PyArray_CopyAndTranspose#
The numpy.fastCopyAndTranspose function has been deprecated. Use the corresponding copy and transpose methods directly:
arr.T.copy()
The underlying C function PyArray_CopyAndTranspose has also been deprecated from the NumPy C-API.
(gh-22313)
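As a minimal sketch of the replacement (array contents chosen for illustration), the deprecated call maps directly onto the copy-of-transpose idiom:

```python
import numpy as np

a = np.arange(6).reshape(2, 3)

# Replacement for the deprecated np.fastCopyAndTranspose(a):
b = a.T.copy()  # a C-contiguous copy of the transpose

print(b.shape)                  # (3, 2)
print(b.flags["C_CONTIGUOUS"])  # True
```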
Conversion of out-of-bound Python integers#
Attempting a conversion from a Python integer to a NumPy value will now always check whether the result can be represented by NumPy. This means the following examples will fail in the future and give a DeprecationWarning now:
np.uint8(-1)
np.array([3000], dtype=np.int8)
Many of these did succeed before. Such code was mainly useful for unsigned integers with negative values, such as np.uint8(-1) giving np.iinfo(np.uint8).max.
Note that conversion between NumPy integers is unaffected, so that np.array(-1).astype(np.uint8) continues to work and use C integer overflow logic. For negative values, it will also work to view the array: np.array(-1, dtype=np.int8).view(np.uint8). In some cases, using np.iinfo(np.uint8).max or val % 2**8 may also work well.
In rare cases input data may mix both negative values and very large unsigned values (i.e. -1 and 2**63). There it is unfortunately necessary to use % on the Python value, or to use signed or unsigned conversion depending on whether negative values are expected.
(gh-22385)
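The still-supported alternatives mentioned above can be sketched as follows (values chosen for illustration):

```python
import numpy as np

# Conversion between NumPy integers still uses C overflow logic:
a = np.array(-1).astype(np.uint8)               # array(255, dtype=uint8)

# Viewing a signed array as unsigned also works for negative values:
b = np.array(-1, dtype=np.int8).view(np.uint8)  # array(255, dtype=uint8)

# On the Python side, taking the value modulo 2**8 gives the same result:
c = np.uint8(-1 % 2**8)                         # 255
```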
Deprecate msort#
The numpy.msort function is deprecated. Use np.sort(a, axis=0) instead.
(gh-22456)
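The replacement is a direct one-liner; a small sketch with an example array:

```python
import numpy as np

a = np.array([[3, 1],
              [2, 4]])

# np.msort(a) sorted along the first axis; np.sort with axis=0 is equivalent:
sorted_a = np.sort(a, axis=0)
print(sorted_a)  # [[2 1]
                 #  [3 4]]
```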
np.str0 and similar are now deprecated#
The scalar type aliases ending in a 0 bit size: np.object0, np.str0, np.bytes0, np.void0, np.int0, np.uint0, as well as np.bool8, are now deprecated and will eventually be removed.
(gh-22607)
Expired deprecations#
The normed keyword argument has been removed from np.histogram, np.histogram2d, and np.histogramdd. Use density instead. If normed was passed by position, density is now used. (gh-21645)
Ragged array creation will now always raise a ValueError unless dtype=object is passed. This includes very deeply nested sequences. (gh-22004)
Support for Visual Studio 2015 and earlier has been removed.
Support for the Windows Interix POSIX interop layer has been removed.
(gh-22139)
Support for Cygwin < 3.3 has been removed. (gh-22159)
The mini() method of np.ma.MaskedArray has been removed. Use either np.ma.MaskedArray.min() or np.ma.minimum.reduce().
The single-argument form of np.ma.minimum and np.ma.maximum has been removed. Use np.ma.minimum.reduce() or np.ma.maximum.reduce() instead. (gh-22228)
Passing dtype instances other than the canonical (mainly native byte-order) ones to dtype= or signature= in ufuncs will now raise a TypeError. We recommend passing the strings "int8" or scalar types np.int8, since the byte-order, datetime/timedelta unit, etc. are never enforced. (Initially deprecated in NumPy 1.21.) (gh-22540)
The dtype= argument to comparison ufuncs is now applied correctly. That means that only bool and object are valid values and dtype=object is enforced. (gh-22541)
The deprecation for the aliases np.object, np.bool, np.float, np.complex, np.str, and np.int is expired (introduced in NumPy 1.20). Some of these will now give a FutureWarning in addition to raising an error, since they will be mapped to the NumPy scalars in the future. (gh-22607)
Compatibility notes#
array.fill(scalar) may behave slightly differently#
numpy.ndarray.fill may in some cases behave slightly differently now, because the logic is aligned with item assignment:
arr = np.array([1])  # with any dtype/value
arr.fill(scalar)
# is now identical to:
arr[0] = scalar
Previously, casting may have produced slightly different answers when using values that could not be represented in the target dtype or when the target had object dtype.
(gh-20924)
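One place the alignment is visible is object dtype; a hedged illustration, assuming the NumPy 1.24 item-assignment semantics described above:

```python
import numpy as np

# With object dtype, fill now stores the value itself, exactly as
# arr[i] = value would:
arr = np.empty(2, dtype=object)
arr.fill([1, 2])

print(arr[0])  # [1, 2]
print(arr[1])  # [1, 2]
```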
Subarray to object cast now copies#
Casting a dtype that includes a subarray to an object will now ensure a copy of the subarray. Previously an unsafe view was returned:
arr = np.ones(3, dtype=[("f", "i", 3)])
subarray_fields = arr.astype(object)[0]
subarray = subarray_fields[0]  # "f" field
np.may_share_memory(subarray, arr) is now always False, while previously it was True for this specific cast.
(gh-21925)
Returned arrays respect uniqueness of dtype kwarg objects#
When the dtype keyword argument is used with array or asarray, the dtype of the returned array now always exactly matches the dtype provided by the caller.
In some cases this change means that a view rather than the input array is returned. The following is an example for this on 64-bit Linux, where long and longlong are the same precision but different dtypes:
>>> arr = np.array([1, 2, 3], dtype="long")
>>> new_dtype = np.dtype("longlong")
>>> new = np.asarray(arr, dtype=new_dtype)
>>> new.dtype is new_dtype
True
>>> new is arr
False
Before the change, the dtype did not match because new is arr was True.
(gh-21995)
DLPack export raises BufferError
#
When an array buffer cannot be exported via DLPack a BufferError
is now
always raised where previously TypeError
or RuntimeError
was raised.
This allows falling back to the buffer protocol or __array_interface__
when
DLPack was tried first.
(gh-22542)
NumPy builds are no longer tested on GCC-6#
Ubuntu 18.04 is deprecated for GitHub actions and GCC-6 is not available on Ubuntu 20.04, so builds using that compiler are no longer tested. We still test builds using GCC-7 and GCC-8.
(gh-22598)
New Features#
New attribute symbol added to polynomial classes#
The polynomial classes in the numpy.polynomial package have a new symbol attribute which is used to represent the indeterminate of the polynomial. This can be used to change the value of the variable when printing:
>>> P_y = np.polynomial.Polynomial([1, 0, -1], symbol="y")
>>> print(P_y)
1.0 + 0.0·y¹ - 1.0·y²
Note that the polynomial classes only support 1D polynomials, so operations that involve polynomials with different symbols are disallowed when the result would be multivariate:
>>> P = np.polynomial.Polynomial([1, -1])  # default symbol is "x"
>>> P_z = np.polynomial.Polynomial([1, 1], symbol="z")
>>> P * P_z
Traceback (most recent call last):
...
ValueError: Polynomial symbols differ
The symbol can be any valid Python identifier. The default is symbol="x", consistent with existing behavior.
(gh-16154)
F2PY support for Fortran character strings#
F2PY now supports wrapping Fortran functions with:
character (e.g. character x)
character array (e.g. character, dimension(n) :: x)
character string (e.g. character(len=10) x)
and character string array (e.g. character(len=10), dimension(n, m) :: x)
arguments, including passing Python unicode strings as Fortran character string arguments.
(gh-19388)
New function np.show_runtime#
A new function numpy.show_runtime has been added to display the runtime information of the machine, in addition to numpy.show_config which displays the build-related information.
(gh-21468)
strict option for testing.assert_array_equal#
The strict option is now available for testing.assert_array_equal. Setting strict=True will disable the broadcasting behaviour for scalars and ensure that input arrays have the same data type.
(gh-21595)
New parameter equal_nan added to np.unique#
np.unique was changed in 1.21 to treat all NaN values as equal and return a single NaN. Setting equal_nan=False will restore pre-1.21 behavior to treat NaNs as unique. Defaults to True.
(gh-21623)
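The two behaviors side by side, assuming NumPy 1.24 or later for the equal_nan keyword:

```python
import numpy as np

a = np.array([1.0, np.nan, np.nan])

collapsed = np.unique(a)                   # NaNs collapsed (default)
distinct = np.unique(a, equal_nan=False)   # pre-1.21 behavior

print(collapsed)  # [ 1. nan]
print(distinct)   # [ 1. nan nan]
```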
casting and dtype keyword arguments for numpy.stack#
The casting and dtype keyword arguments are now available for numpy.stack. To use them, write np.stack(..., dtype=None, casting='same_kind').
casting and dtype keyword arguments for numpy.vstack#
The casting and dtype keyword arguments are now available for numpy.vstack. To use them, write np.vstack(..., dtype=None, casting='same_kind').
casting and dtype keyword arguments for numpy.hstack#
The casting and dtype keyword arguments are now available for numpy.hstack. To use them, write np.hstack(..., dtype=None, casting='same_kind').
(gh-21627)
The bit generator underlying the singleton RandomState can be changed#
The singleton RandomState instance exposed in the numpy.random module is initialized at startup with the MT19937 bit generator. The new function set_bit_generator allows the default bit generator to be replaced with a user-provided bit generator. This function has been introduced to provide a method allowing seamless integration of a high-quality, modern bit generator in new code with existing code that makes use of the singleton-provided random variate generating functions. The companion function get_bit_generator returns the current bit generator being used by the singleton RandomState. This is provided to simplify restoring the original source of randomness if required.
The preferred method to generate reproducible random numbers is to use a modern bit generator in an instance of Generator. The function default_rng simplifies instantiation:
>>> rg = np.random.default_rng(3728973198)
>>> rg.random()
The same bit generator can then be shared with the singleton instance so that calling functions in the random module will use the same bit generator:
>>> orig_bit_gen = np.random.get_bit_generator()
>>> np.random.set_bit_generator(rg.bit_generator)
>>> np.random.normal()
The swap is permanent (until reversed), so any call to functions in the random module will use the new bit generator. The original can be restored if required for code to run correctly:
>>> np.random.set_bit_generator(orig_bit_gen)
(gh-21976)
np.void now has a dtype argument#
NumPy now allows constructing structured void scalars directly by passing the dtype argument to np.void.
(gh-22316)
Improvements#
F2PY Improvements#
The generated extension modules don't use the deprecated NumPy C-API anymore.
Improved f2py generated exception messages.
Numerous bug and flake8 warning fixes.
Various CPP macros that one can use within C-expressions of signature files are prefixed with f2py_. For example, one should use f2py_len(x) instead of len(x).
A new construct character(f2py_len=...) is introduced to support returning assumed-length character strings (e.g. character(len=*)) from wrapper functions.
A hook to support rewriting f2py internal data structures after reading all its input files is introduced. This is required, for instance, for backward compatibility of SciPy support, where character arguments are treated as character string arguments in C expressions.
(gh-19388)
IBM zSystems Vector Extension Facility (SIMD)#
Added support for SIMD extensions of zSystem (z13, z14, z15), through the universal intrinsics interface. This support leads to performance improvements for all SIMD kernels implemented using the universal intrinsics, including the following operations: rint, floor, trunc, ceil, sqrt, absolute, square, reciprocal, tanh, sin, cos, equal, not_equal, greater, greater_equal, less, less_equal, maximum, minimum, fmax, fmin, argmax, argmin, add, subtract, multiply, divide.
(gh-20913)
NumPy now gives floating point errors in casts#
In most cases, NumPy previously did not give floating point warnings or errors when these happened during casts. For example, casts like:
np.array([2e300]).astype(np.float32)  # overflow for float32
np.array([np.inf]).astype(np.int64)
should now generally give floating point warnings. These warnings should warn that floating point overflow occurred. For errors when converting floating point values to integers, users should expect invalid value warnings.
Users can modify the behavior of these warnings using np.errstate.
Note that for float to int casts, the exact warnings that are given may be platform dependent. For example:
arr = np.full(100, fill_value=1000, dtype=np.float64)
arr.astype(np.int8)
may give a result equivalent to (the intermediate cast means no warning is given):
arr.astype(np.int64).astype(np.int8)
or may return an undefined result, with a warning set:
RuntimeWarning: invalid value encountered in cast
The precise behavior is subject to the C99 standard and its implementation in both software and hardware.
(gh-21437)
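A short sketch of controlling the new warnings with np.errstate; the overflowing cast itself saturates to inf on any NumPy version:

```python
import numpy as np

# Overflowing a float32 cast saturates to inf; with NumPy >= 1.24 this
# also sets the floating point "overflow" status, which np.errstate controls:
with np.errstate(over="ignore"):
    x = np.array([2e300]).astype(np.float32)

print(x)  # [inf]
```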
F2PY supports the value attribute#
The Fortran standard requires that variables declared with the value attribute must be passed by value instead of reference. F2PY now supports this use pattern correctly. So integer, intent(in), value :: x in Fortran codes will have correct wrappers generated.
(gh-21807)
Added pickle support for third-party BitGenerators#
The pickle format for bit generators was extended to allow each bit generator to supply its own constructor during pickling. Previous versions of NumPy only supported unpickling Generator instances created with one of the core set of bit generators supplied with NumPy. Attempting to unpickle a Generator that used a third-party bit generator would fail, since the constructor used during the unpickling was only aware of the bit generators included in NumPy.
(gh-22014)
arange() now explicitly fails with dtype=str#
Previously, the np.arange(n, dtype=str) function worked for n=1 and n=2, but would raise a non-specific exception message for other values of n. Now, it raises a TypeError informing that arange does not support string dtypes:
>>> np.arange(2, dtype=str)
Traceback (most recent call last):
...
TypeError: arange() not supported for inputs with DType <class 'numpy.dtype[str_]'>.
(gh-22055)
numpy.typing protocols are now runtime checkable#
The protocols used in numpy.typing.ArrayLike and numpy.typing.DTypeLike are now properly marked as runtime checkable, making them easier to use for runtime type checkers.
(gh-22357)
Performance improvements and changes#
Faster version of np.isin and np.in1d for integer arrays#
np.in1d (used by np.isin) can now switch to a faster algorithm (up to >10x faster) when it is passed two integer arrays. This is often used automatically, but you can use kind="sort" or kind="table" to force the old or new method, respectively.
(gh-12065)
Faster comparison operators#
The comparison functions (numpy.equal, numpy.not_equal, numpy.less, numpy.less_equal, numpy.greater and numpy.greater_equal) are now much faster, as they are now vectorized with universal intrinsics. For a CPU with the SIMD extension AVX512BW, the performance gain is up to 2.57x, 1.65x and 19.15x for integer, float and boolean data types, respectively (with N=50000).
(gh-21483)
Changes#
Better reporting of integer division overflow#
Integer division overflow of scalars and arrays used to provide a RuntimeWarning and the return value was undefined, leading to crashes on rare occasions:
>>> np.array([np.iinfo(np.int32).min]*10, dtype=np.int32) // np.int32(-1)
<stdin>:1: RuntimeWarning: divide by zero encountered in floor_divide
array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0], dtype=int32)
Integer division overflow now returns the input dtype's minimum value and raises the following RuntimeWarning:
>>> np.array([np.iinfo(np.int32).min]*10, dtype=np.int32) // np.int32(-1)
<stdin>:1: RuntimeWarning: overflow encountered in floor_divide
array([-2147483648, -2147483648, -2147483648, -2147483648, -2147483648,
       -2147483648, -2147483648, -2147483648, -2147483648, -2147483648],
      dtype=int32)
(gh-21506)
masked_invalid now modifies the mask in-place#
When used with copy=False, numpy.ma.masked_invalid now modifies the input masked array in-place. This makes it behave identically to masked_where and better matches the documentation.
(gh-22046)
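A small sketch of the masking behavior (the in-place update of the input's own mask assumes NumPy 1.24 or later):

```python
import numpy as np

arr = np.ma.masked_array([1.0, np.inf, np.nan], mask=[False, False, False])

# With copy=False, the mask of arr itself is updated in-place:
res = np.ma.masked_invalid(arr, copy=False)

print(res.mask)  # [False  True  True]
```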
nditer/NpyIter allows allocating all operands#
The NumPy iterator available through np.nditer in Python and as NpyIter in C now supports allocating all arrays. The iterator shape defaults to () in this case. The operands' dtypes must be provided, since a "common dtype" cannot be inferred from the other inputs.
(gh-22457)