This release includes several new features as well as numerous bug fixes and refactorings. It supports Python 2.4 - 2.7 and 3.1 - 3.3 and is the last release that supports Python 2.4 - 2.5.
where= parameter to ufuncs (allows the use of boolean arrays to choose where a computation should be done; see the sketch after this list)
vectorize improvements (added ‘excluded’ and ‘cache’ keyword, general cleanup and bug fixes)
numpy.random.choice (random sample generating function)
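A minimal sketch of the new where= keyword, here with np.sqrt (any ufunc accepts it); positions where the mask is False keep whatever values are already in the output array:

import numpy as np

a = np.array([1.0, 4.0, 9.0, -1.0])
out = np.zeros_like(a)
# compute only where the condition holds; the last slot keeps its value
np.sqrt(a, out=out, where=a >= 0)
print(out)  # [1. 2. 3. 0.]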
In a future version of numpy, the functions np.diag, np.diagonal, and the diagonal method of ndarrays will return a view onto the original array, instead of producing a copy as they do now. This makes a difference if you write to the array returned by any of these functions. To facilitate this transition, numpy 1.7 produces a FutureWarning if it detects that you may be attempting to write to such an array. See the documentation for np.diagonal for details.
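For example (a minimal sketch): writing to the result of np.diagonal is the pattern that triggers the warning; taking an explicit copy is safe under both the current copy semantics and the future view semantics:

import numpy as np

a = np.arange(9).reshape(3, 3)
d = np.diagonal(a).copy()  # explicit copy avoids the FutureWarning
d[0] = 100                 # modifies the copy only, never a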
Similar to np.diagonal above, in a future version of numpy, indexing a record array by a list of field names will return a view onto the original array, instead of producing a copy as it does now. As with np.diagonal, numpy 1.7 produces a FutureWarning if it detects that you may be attempting to write to such an array. See the documentation for array indexing for details.
In a future version of numpy, the default casting rule for UFunc out= parameters will be changed from ‘unsafe’ to ‘same_kind’. (This also applies to in-place operations like a += b, which is equivalent to np.add(a, b, out=a).) Most usages which violate the ‘same_kind’ rule are likely bugs, so this change may expose previously undetected errors in projects that depend on NumPy. In this version of numpy, such usages will continue to succeed, but will raise a DeprecationWarning.
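A sketch of the kind of code affected: writing a float64 result into an int32 output is an 'unsafe' cast, so it triggers the DeprecationWarning; passing casting='unsafe' explicitly states the intent and avoids the future error:

import numpy as np

a = np.zeros(3, dtype=np.int32)
b = np.array([0.5, 1.5, 2.5])
# a += b is equivalent to np.add(a, b, out=a) and casts float64 to int32
np.add(a, b, out=a, casting='unsafe')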
Full-array boolean indexing now uses a different, optimized code path. This code path should produce the same results, but any feedback about changes to your code would be appreciated.
Attempting to write to a read-only array (one with arr.flags.writeable set to False) used to raise either a RuntimeError, ValueError, or TypeError inconsistently, depending on which code path was taken. It now consistently raises a ValueError.
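For example:

import numpy as np

a = np.zeros(3)
a.flags.writeable = False
try:
    a[0] = 1.0
except ValueError:
    pass  # 1.7 raises ValueError consistently for writes to read-only arrays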
The <ufunc>.reduce functions evaluate some reductions in a different order than in previous versions of NumPy, generally providing higher performance. Because of the nature of floating-point arithmetic, this may subtly change some results, just as linking NumPy to a different BLAS implementation such as MKL can.
If upgrading from 1.5, note that in 1.6 and 1.7 substantial code has been added and some code paths have been altered, particularly in the areas of type resolution and buffered iteration over universal functions. This might have an impact on your code, particularly if you relied on accidental behavior in the past.
Any ufunc.reduce function call, as well as other reductions like sum, prod, any, all, max and min, now supports choosing a subset of the axes to reduce over. Previously, one could say axis=None to mean all the axes or axis=# to pick a single axis. Now, one can also say axis=(#,#) to pick a list of axes for reduction.
There is a new keepdims= parameter, which if set to True, doesn’t throw away the reduction axes but instead sets them to have size one. When this option is set, the reduction result will broadcast correctly to the original operand which was reduced.
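A short sketch of both options:

import numpy as np

x = np.arange(24).reshape(2, 3, 4)
s = x.sum(axis=(0, 2))                 # reduce over a tuple of axes; shape (3,)
m = x.sum(axis=(0, 2), keepdims=True)  # reduced axes kept with size one; shape (1, 3, 1)
frac = x / m                           # broadcasts against the original operand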
Note: The datetime API is experimental in 1.7.0, and may undergo changes in future versions of NumPy.
There have been a lot of fixes and enhancements to datetime64 compared to NumPy 1.6:
the parser is quite strict about only accepting ISO 8601 dates, with a few convenience extensions
converts between units correctly
datetime arithmetic works correctly
business day functionality (allows the datetime to be used in contexts where only certain days of the week are valid)
The notes in doc/source/reference/arrays.datetime.rst (also available in the online docs at arrays.datetime.html) should be consulted for more details.
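A few illustrative calls (a sketch only; the datetime documentation covers the full API):

import numpy as np

d = np.datetime64('2012-03-04')           # strict ISO 8601 parsing
print(d.astype('datetime64[M]'))          # unit conversion: 2012-03
print(d - np.datetime64('2012-03-01'))    # datetime arithmetic: 3 days
print(np.busday_offset('2012-03-02', 1))  # business days: Friday -> 2012-03-05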
See the new formatter parameter of the numpy.set_printoptions function.
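For example, a formatter maps a type key to a callable that renders each element (a minimal sketch):

import numpy as np

np.set_printoptions(formatter={'float': lambda x: '%6.3f' % x})
print(np.array([np.pi, np.e]))       # e.g. [ 3.142   2.718]
np.set_printoptions(formatter=None)  # restore the default formatting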
A generic sampling function has been added which will generate samples from a given array-like. The samples can be with or without replacement, and with uniform or given non-uniform probabilities.
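This is numpy.random.choice, listed in the highlights above; for example:

import numpy as np

print(np.random.choice(5, 3, replace=False))                      # uniform, without replacement
print(np.random.choice(['a', 'b', 'c'], 4, p=[0.5, 0.25, 0.25]))  # non-uniform, with replacement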
Returns a boolean array where two arrays are element-wise equal within a tolerance. Both relative and absolute tolerance can be specified.
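The function described here is numpy.isclose; a minimal sketch:

import numpy as np

a = np.array([1.0, 1e-8, 1000.0])
b = np.array([1.0000001, 0.0, 1000.5])
# rtol is the relative tolerance, atol the absolute tolerance
print(np.isclose(a, b, rtol=1e-5, atol=1e-8))   # [ True  True False]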
Axis keywords have been added to the integration and differentiation functions and a tensor keyword was added to the evaluation functions. These additions allow multi-dimensional coefficient arrays to be used in those functions. New functions for evaluating 2-D and 3-D coefficient arrays on grids or sets of points were added together with 2-D and 3-D pseudo-Vandermonde matrices that can be used for fitting.
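A sketch using the polynomial basis (polyval2d and polyvander2d; the other bases provide analogous functions):

import numpy as np
from numpy.polynomial import polynomial as P

c = np.array([[1.0, 2.0],
              [3.0, 4.0]])        # c[i, j] multiplies x**i * y**j
x = np.array([0.0, 1.0, 2.0])
y = np.array([0.0, 1.0, 0.5])
vals = P.polyval2d(x, y, c)        # evaluate at the points (x[k], y[k])
V = P.polyvander2d(x, y, [1, 1])   # 2-D pseudo-Vandermonde matrix, usable for fitting
assert np.allclose(V.dot(c.ravel()), vals)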
A pad module containing functions for padding n-dimensional arrays has been added. The various private padding functions are exposed as options to a public ‘pad’ function. Example:
pad(a, 5, mode='mean')
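A slightly fuller sketch of the public pad function:

import numpy as np

a = np.array([1, 2, 3])
print(np.pad(a, 2, mode='mean'))        # [2 2 1 2 3 2 2]
print(np.pad(a, (1, 3), mode='edge'))   # [1 1 2 3 3 3 3]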
Current modes are constant, edge, linear_ramp, maximum, mean, median, minimum, reflect, symmetric, wrap, and <function>.
The function searchsorted now accepts a ‘sorter’ argument that is a permutation array that sorts the array to search.
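For example, the sorter is typically the result of argsort on the array being searched:

import numpy as np

a = np.array([40, 10, 30, 20])
order = np.argsort(a)                             # permutation that sorts a
idx = np.searchsorted(a, [15, 35], sorter=order)
print(idx)                                        # [1 3] -- positions in sorted order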
Added experimental support for the AArch64 architecture.
New function PyArray_FailUnlessWriteable provides a consistent interface for checking array writeability – any C code which works with arrays whose WRITEABLE flag is not known to be True a priori should make sure to call this function before writing.
NumPy C Style Guide added (doc/C_STYLE_GUIDE.rst.txt).
The function np.concatenate now tries to match the layout of its input arrays. Previously, the layout did not follow any particular pattern and depended in an undesirable way on the particular axis chosen for concatenation. A bug was also fixed which silently allowed out-of-bounds axis arguments.
The ufuncs logical_or, logical_and, and logical_not now follow Python's behavior with object arrays, instead of trying to call methods on the objects. For example, the expression (3 and 'test') produces the string 'test', and now np.logical_and(np.array(3, 'O'), np.array('test', 'O')) produces 'test' as well.
The .base attribute on ndarrays, which is used on views to ensure that the underlying array owning the memory is not deallocated prematurely, now collapses out references when you have a view-of-a-view. For example:
a = np.arange(10)
b = a[1:]
c = b[1:]
In numpy 1.6, c.base is b, and c.base.base is a. In numpy 1.7, c.base is a.
To increase backwards compatibility for software which relies on the old behaviour of .base, we only ‘skip over’ objects which have exactly the same type as the newly created view. This makes a difference if you use ndarray subclasses. For example, if we have a mix of ndarray and matrix objects which are all views on the same original ndarray:
a = np.arange(10)
b = np.asmatrix(a)
c = b[0, 1:]
d = c[0, 1:]
then d.base will be b. This is because d is a matrix object, and so the collapsing process only continues so long as it encounters other matrix objects. It considers c, b, and a in that order, and b is the last entry in that list which is a matrix object.
Casting rules have undergone some changes in corner cases, due to the NA-related work. In particular for combinations of scalar+scalar:
the longlong type (q) now stays longlong for operations with any other number (? b h i l q p B H I), previously it was cast as int_ (l). The ulonglong type (Q) now stays as ulonglong instead of uint (L).
the timedelta64 type (m) can now be mixed with any integer type (b h i l q p B H I L Q P), previously it raised TypeError.
For array + scalar, the above rules just broadcast, except when the array and the scalar are unsigned/signed integers; then the result gets converted to the array type (of possibly larger size), as illustrated by the following examples:
>>> (np.zeros((2,), dtype=np.uint8) + np.int16(257)).dtype
dtype('uint16')
>>> (np.zeros((2,), dtype=np.int8) + np.uint16(257)).dtype
dtype('int16')
>>> (np.zeros((2,), dtype=np.int16) + np.uint32(2**17)).dtype
dtype('int32')
Whether the size gets increased depends on the size of the scalar, for example:
>>> (np.zeros((2,), dtype=np.uint8) + np.int16(255)).dtype
dtype('uint8')
>>> (np.zeros((2,), dtype=np.uint8) + np.int16(256)).dtype
dtype('uint16')
Also a complex128 scalar + float32 array is cast to complex64.
In NumPy 1.7 the datetime64 type (M) must be constructed by explicitly specifying the unit as the second argument (e.g. np.datetime64(2000, 'Y')).
Specifying a custom string formatter with a _format array attribute is deprecated. The new formatter keyword in numpy.set_printoptions or numpy.array2string can be used instead.
The deprecated imports in the polynomial package have been removed.
concatenate now raises a DeprecationWarning for 1-D arrays if axis != 0. Versions of numpy < 1.7.0 ignored the axis argument for 1-D arrays. We allow this for now, but in due course we will raise an error.
Direct access to the fields of PyArrayObject* has been deprecated. Direct access has been recommended against for many releases. Expect similar deprecations for PyArray_Descr* and other core objects in the future as preparation for NumPy 2.0.
The macros in old_defines.h are deprecated and will be removed in the next major release (>= 2.0). The sed script tools/replace_old_macros.sed can be used to replace these macros with the newer versions.
You can test your code against the deprecated C API by adding a line composed of #define NPY_NO_DEPRECATED_API and the target version number, such as NPY_1_7_API_VERSION, before including any NumPy headers.
The NPY_CHAR member of the NPY_TYPES enum is deprecated and will be removed in NumPy 1.8. See the discussion at gh-2801 for more details.