NumPy 1.23.0 Release Notes#
New functions#
ndenumerate specialization for masked arrays#
The masked array module now provides the numpy.ma.ndenumerate function, an alternative to numpy.ndenumerate that skips masked values by default.
(gh-20020)
NumPy now supports the DLPack protocol#
numpy.from_dlpack has been added to NumPy to exchange data using the DLPack protocol. It accepts Python objects that implement the __dlpack__ and __dlpack_device__ methods and returns an ndarray object which is generally a view of the input object's data.
(gh-21145)
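A minimal sketch of the exchange, using a NumPy array itself as the DLPack producer (any object implementing __dlpack__ and __dlpack_device__, such as a framework tensor, would work the same way):

import numpy as np

src = np.arange(6, dtype=np.float32)
view = np.from_dlpack(src)  # zero-copy: the result shares the producer's memory

view[0] = 99.0
print(src[0])  # 99.0, since the result is a view, not a copy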
Deprecations#
Setting __array_finalize__ to None is deprecated. It must now be a method and may wish to call super().__array_finalize__(obj) after checking for None or if the NumPy version is sufficiently new.
(gh-20766)
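A minimal sketch of a conforming subclass (the info attribute is purely illustrative):

import numpy as np

class MyArray(np.ndarray):
    # Define __array_finalize__ as a method instead of setting it to None.
    def __array_finalize__(self, obj):
        # ndarray.__array_finalize__ is callable in 1.23, so delegating is safe.
        super().__array_finalize__(obj)
        # Carry a hypothetical attribute through views and copies.
        self.info = getattr(obj, "info", None)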
Deprecate PyDataMem_SetEventHook#
The ability to track allocations is now built into Python via tracemalloc. The hook function PyDataMem_SetEventHook has been deprecated, and the demonstration of its use in tool/allocation_tracking has been removed.
(gh-20394)
Deprecation of numpy.distutils#
numpy.distutils has been deprecated, as a result of distutils itself being deprecated. It will not be present in NumPy for Python >= 3.12, and will be removed completely 2 years after the release of Python 3.12. For more details, see Status of numpy.distutils and migration advice.
(gh-20875)
Expired deprecations#
alen and asscalar removed#
The deprecated np.alen and np.asscalar functions were removed.
(gh-20414)
Remove deprecated NPY_ARRAY_UPDATEIFCOPY#
The array flag UPDATEIFCOPY and enum NPY_ARRAY_UPDATEIFCOPY were deprecated in 1.14. They were replaced by WRITEBACKIFCOPY, which requires calling PyArray_ResolveWritebackIfCopy before the array is deallocated. Also removed is the associated (and deprecated) PyArray_XDECREF_ERR.
(gh-20589)
Changing to dtype of different size in F-contiguous arrays no longer permitted#
Behavior deprecated in NumPy 1.11.0 allowed the following counterintuitive result:
>>> x = np.array(["aA", "bB", "cC", "dD", "eE", "fF"]).reshape(1, 2, 3).transpose()
>>> x.view('U1')  # deprecated behavior, shape (6, 2, 1)
DeprecationWarning: ...
array([[['a'],
        ['d']],
       [['A'],
        ['D']],
       [['b'],
        ['e']],
       [['B'],
        ['E']],
       [['c'],
        ['f']],
       [['C'],
        ['F']]], dtype='<U1')
Now that the deprecation has expired, dtype reassignment only happens along the last axis, so the above will result in:
>>> x.view('U1')  # new behavior, shape (3, 2, 2)
array([[['a', 'A'],
        ['d', 'D']],
       [['b', 'B'],
        ['e', 'E']],
       [['c', 'C'],
        ['f', 'F']]], dtype='<U1')
When the last axis is not contiguous, an error is now raised in place of the DeprecationWarning:
>>> x = np.array(["aA", "bB", "cC", "dD", "eE", "fF"]).reshape(2, 3).transpose()
>>> x.view('U1')
ValueError: To change to a dtype of a different size, the last axis must be contiguous
The new behavior is equivalent to the more intuitive:
>>> x.copy().view('U1')
To replicate the old behavior on F-but-not-C-contiguous arrays, use:
>>> x.T.view('U1').T
(gh-20722)
Exceptions will be raised during array-like creation#
When an object raised an exception during access of the special attributes __array__ or __array_interface__, this exception was usually ignored. This behaviour was deprecated in 1.21, and the exception will now be raised.
(gh-20835)
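A minimal sketch of the new behaviour, assuming an object whose attribute access fails:

import numpy as np

class Broken:
    @property
    def __array_interface__(self):
        raise RuntimeError("lazy buffer is not available")

# Previously the error was usually swallowed during array creation;
# in 1.23 the RuntimeError propagates to the caller.
np.array(Broken())  # raises RuntimeError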
Expired deprecation of multidimensional indexing with non-tuple values#
Multidimensional indexing with anything but a tuple was deprecated in NumPy 1.15. Previously, code such as arr[ind] where ind = [[0, 1], [0, 1]] produced a FutureWarning and was interpreted as a multidimensional index (i.e., arr[tuple(ind)]). Now this example is treated like an array index over a single dimension (arr[array(ind)]).
(gh-21029)
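A small illustration of the two interpretations:

import numpy as np

arr = np.arange(9).reshape(3, 3)
ind = [[0, 1], [0, 1]]

print(arr[tuple(ind)])           # old meaning: elements (0, 0) and (1, 1) -> [0 4]
print(arr[np.array(ind)].shape)  # new meaning of arr[ind]: first-axis lookup, (2, 2, 3)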
Compatibility notes#
1D np.linalg.norm preserves float input types, even for scalar results#
Previously, this would promote to float64 when the ord argument was not one of the explicitly listed values, e.g. ord=3:
>>> f32 = np.float32([1, 2])
>>> np.linalg.norm(f32, 2).dtype
dtype('float32')
>>> np.linalg.norm(f32, 3).dtype
dtype('float64')  # numpy 1.22
dtype('float32')  # numpy 1.23
This change affects only float32 and float16 vectors with ord other than -Inf, 0, 1, 2, and Inf.
(gh-17709)
NPY_RELAXED_STRIDES_CHECKING has been removed#
NumPy cannot be compiled with NPY_RELAXED_STRIDES_CHECKING=0
anymore. Relaxed strides have been the default for many years and
the option was initially introduced to allow a smoother transition.
(gh-20220)
np.loadtxt has received several changes#
The row counting of numpy.loadtxt was fixed. loadtxt ignores fully empty lines in the file, but previously counted them towards max_rows. When max_rows is used and the file contains empty lines, these will now not be counted. Previously, it was possible that the result contained fewer than max_rows rows even though more data was available to be read. If the old behaviour is required, itertools.islice may be used:
import itertools
lines = itertools.islice(open("file"), 0, max_rows)
result = np.loadtxt(lines, ...)
While generally much faster and improved, numpy.loadtxt may now fail to convert certain strings to numbers that were previously read successfully. The most important cases for this are:

Parsing floating point values such as 1.0 into integers will now fail.

Parsing hexadecimal floats such as 0x3p3 will fail.

An _ was previously accepted as a thousands delimiter, as in 100_000. This will now result in an error.
If you experience these limitations, they can all be worked around by passing appropriate converters=. NumPy now supports passing a single converter to be used for all columns to make this more convenient. For example, converters=float.fromhex can read hexadecimal float numbers and converters=int will be able to read 100_000, as in the sketch below.
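A minimal sketch of both converter forms (the column data is made up for illustration):

import io
import numpy as np

data = io.StringIO("0x1.8p3 100_000\n0x1p-1 2_000")
# Per-column converters: hexadecimal floats in column 0, underscored ints in column 1.
arr = np.loadtxt(data, converters={0: float.fromhex, 1: int})
# arr -> [[12.0, 100000.0], [0.5, 2000.0]]

# A single callable now applies to every column.
hexdata = io.StringIO("0x1p1 0x1p2\n0x1p3 0x1p4")
arr2 = np.loadtxt(hexdata, converters=float.fromhex)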
Further, the error messages have been generally improved. However, this means that error types may differ. In particular, a ValueError is now always raised when parsing of a single entry fails.
(gh-20580)
New Features#
crackfortran has support for operator and assignment overloading#
The crackfortran parser now understands operator and assignment definitions in a module. They are added in the body list of the module, which contains a new key implementedby listing the names of the subroutines or functions implementing the operator or assignment.
(gh-15006)
f2py supports reading access type attributes from derived type statements#
As a result, one does not need to use public or private statements to specify derived type access properties.
(gh-15844)
New parameter ndmin added to genfromtxt#
This parameter behaves the same as ndmin from loadtxt.
(gh-20500)
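A small sketch: a single data row would otherwise produce a 1-D result, and ndmin=2 keeps the 2-D (rows, columns) shape:

import io
import numpy as np

arr = np.genfromtxt(io.StringIO("1 2 3"), ndmin=2)
print(arr.shape)  # (1, 3)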
np.loadtxt now supports quote character and single converter function#
numpy.loadtxt now supports an additional quotechar keyword argument which is not set by default. Using quotechar='"' will read quoted fields as used by the Excel CSV dialect. Further, it is now possible to pass a single callable rather than a dictionary for the converters argument.
(gh-20580)
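A minimal sketch of quoted-field handling (the labels are made up):

import io
import numpy as np

csv = io.StringIO('"label, with comma",1\n"another, one",2')
arr = np.loadtxt(csv, delimiter=",", quotechar='"', dtype=str)
print(arr[0, 0])  # label, with comma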
Changing to dtype of a different size now requires contiguity of only the last axis#
Previously, viewing an array with a dtype of a different itemsize required that the entire array be C-contiguous. This limitation would unnecessarily force the user to make contiguous copies of non-contiguous arrays before being able to change the dtype.
This change affects not only ndarray.view, but other construction mechanisms, including the discouraged direct assignment to ndarray.dtype. This change expires the deprecation regarding the viewing of F-contiguous arrays, described elsewhere in the release notes.
(gh-20722)
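A short sketch of what the relaxed rule permits, assuming NumPy >= 1.23:

import numpy as np

a = np.zeros((4, 4), dtype=np.int64)[::2]  # first axis is not contiguous
v = a.view(np.int32)  # allowed now: only the last axis must be contiguous
print(v.shape)  # (2, 8)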
deterministic output files for F2PY#
For F77 inputs, f2py will generate modname-f2pywrappers.f unconditionally, though these may be empty. For free-form inputs, modname-f2pywrappers.f and modname-f2pywrappers2.f90 will both be generated unconditionally, and may be empty. This allows writing generic output rules in cmake or meson and other build systems. Older behavior can be restored by passing --skip-empty-wrappers to f2py. The Using via meson documentation details usage.
(gh-21187)
keepdims parameter for average#
The parameter keepdims was added to the functions numpy.average and numpy.ma.average. The parameter has the same meaning as it does in reduction functions such as numpy.sum or numpy.mean.
(gh-21485)
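A small illustration of why the kept axis is useful for broadcasting:

import numpy as np

a = np.arange(6.0).reshape(2, 3)
m = np.average(a, axis=1, keepdims=True)  # shape (2, 1) rather than (2,)
print(a - m)  # the retained axis broadcasts against the original array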
Improvements#
ndarray.__array_finalize__ is now callable#
This means subclasses can now use super().__array_finalize__(obj) without worrying whether ndarray is their superclass or not. The actual call remains a no-op.
(gh-20766)
Add support for VSX4/Power10#
With VSX4/Power10 enablement, the new instructions available in Power ISA 3.1 can be used to accelerate some NumPy operations, e.g., floor_divide, modulo, etc.
(gh-20821)
np.fromiter now accepts objects and subarrays#
The fromiter function now supports object and subarray dtypes. Please see the function documentation for examples.
(gh-20993)
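A short sketch of the two newly supported cases:

import numpy as np

# Subarray dtype: each yielded item fills a (2,)-shaped entry.
pairs = np.fromiter([(1, 2), (3, 4)], dtype=np.dtype((int, 2)))
print(pairs.shape)  # (2, 2)

# Object dtype is now supported as well.
objs = np.fromiter(["x", (1, 2)], dtype=object)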
Math C library feature detection now uses correct signatures#
Compiling is preceded by a detection phase to determine whether the underlying libc supports certain math operations. Previously this code did not respect the proper signatures. Fixing this enables compilation for the wasm-ld backend (compilation for WebAssembly) and reduces the number of warnings.
(gh-21154)
np.kron now maintains subclass information#
np.kron now maintains subclass information, such as for masked arrays, while computing the Kronecker product of the inputs:
>>> x = ma.array([[1, 2], [3, 4]], mask=[[0, 1], [1, 0]])
>>> np.kron(x, x)
masked_array(
  data=[[1, --, --, --],
        [--, 4, --, --],
        [--, --, 4, --],
        [--, --, --, 16]],
  mask=[[False,  True,  True,  True],
        [ True, False,  True,  True],
        [ True,  True, False,  True],
        [ True,  True,  True, False]],
  fill_value=999999)
Warning: np.kron output now follows ufunc ordering (multiply) to determine the output class type:
>>> class myarr(np.ndarray):
...     __array_priority__ = -1
>>> a = np.ones([2, 2])
>>> ma = myarr(a.shape, a.dtype, a.data)
>>> type(np.kron(a, ma)) == np.ndarray
False  # Before it was True
>>> type(np.kron(a, ma)) == myarr
True
(gh-21262)
Performance improvements and changes#
Faster np.loadtxt#
numpy.loadtxt is now generally much faster than before, as most of it is now implemented in C.
(gh-20580)
Faster reduction operators#
Reduction operations like numpy.sum, numpy.prod, numpy.add.reduce, and numpy.logical_and.reduce on contiguous integer-based arrays are now much faster.
(gh-21001)
Faster np.where#
numpy.where is now much faster than previously on unpredictable/random input data.
(gh-21130)
Faster operations on NumPy scalars#
Many operations on NumPy scalars are now significantly faster, although rare operations (e.g. with 0-D arrays rather than scalars) may be slower in some cases. However, even with these improvements, users who want the best performance for their scalars may want to convert a known NumPy scalar into a Python one using scalar.item().
(gh-21188)
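For instance, a minimal sketch of that conversion:

import numpy as np

s = np.float32(1.5)
x = s.item()  # plain Python float
# Pure-Python arithmetic on x avoids NumPy's per-operation dispatch overhead.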
Faster np.kron#
numpy.kron is about 80% faster, as the product is now computed using broadcasting.
(gh-21354)