# Differences with NumPy
## Concrete scalar types
In NumPy 2.2, the `float64` and `complex128` scalar types were made into concrete types. Before that, they were aliases of `floating[_64Bit]` and `complexfloating[_64Bit, _64Bit]`, respectively. But at runtime, `floating` is not the same as `float64`, so this was incorrect.
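This runtime relationship is easy to verify:

```python
import numpy as np

# float64 is a strict subclass of the abstract floating type, not the same class
assert issubclass(np.float64, np.floating)
assert np.float64 is not np.floating

# An instance of float64 is a floating, but its concrete type is float64
x = np.float64(1.0)
assert isinstance(x, np.floating)
assert type(x) is np.float64
```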
Before NumPy 2.2, type-checkers accepted this kind of assignment:
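The original snippet isn't reproduced here, but a minimal sketch of such an assignment (variable names are illustrative) looks like this:

```python
import numpy as np

x: np.floating = np.float64(1.0)

# Accepted before NumPy 2.2, when float64 was an alias of floating[_64Bit];
# rejected afterwards, because an abstract floating is not a float64
y: np.float64 = x
```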
But now that `float64` is a proper subclass of `floating`, this is no longer valid, and type-checkers will report it as an error.
However, many users did not like this, because it often required them to change a lot of their code. So for a smooth transition, NumPy kept the other scalar types, such as `int8`, as aliases of `np.integer[_8Bit]`. This is, in fact, one of the main reasons why NumType was created. In NumType all scalar types are annotated as concrete subtypes, or aliases thereof. That means that assigning an abstract `np.integer` value to a variable annotated as `np.int8` is not allowed in NumType, which in NumPy you could get away with.
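A sketch of the difference, assuming the aliasing scheme described above (variable names are illustrative):

```python
import numpy as np

x: np.int8 = np.int8(42)

# Assigning up the hierarchy is fine everywhere: int8 subclasses integer
y: np.integer = x

# The reverse is what NumType rejects: an abstract np.integer value is not
# assignable to np.int8. Under NumPy's alias scheme, type-checkers let it slide.
z: np.int8 = y  # error with numtype installed
```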
To be specific, the following scalar types are proper subclasses in NumType:

- `int8`, `int16`, `int32`, and `int64`
- `uint8`, `uint16`, `uint32`, and `uint64`
- `float16`, `float32`, `float64`, and `longdouble`
- `complex64`, `complex128`, and `clongdouble`
The other numeric scalar types are defined as aliases, and assume a 64-bit platform.
## No more `NBitBase`
The purpose of `numpy.typing.NBitBase` was to serve as an upper bound for a type parameter, for use in generic abstract scalar types like `floating[T]`. But the now-concrete scalar types will no longer match `floating[T]`. `NBitBase` should therefore not be used anymore.
### Alternatives
Type parameters can instead use an abstract scalar type as an upper bound: rather than bounding a type variable by `NBitBase` and wrapping the result in `floating[...]`, you can bound the type variable by `np.floating` directly. Besides being valid again, this also makes the code more readable.
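A sketch of the two styles (function and type-variable names are illustrative):

```python
from typing import TypeVar

import numpy as np
import numpy.typing as npt

# Old style: a type parameter bounded by NBitBase, wrapped in floating[...]
NBitT = TypeVar("NBitT", bound=npt.NBitBase)

def double_old(x: np.floating[NBitT]) -> np.floating[NBitT]:
    return x + x

# New style: bound the type parameter by the abstract scalar type directly
FloatT = TypeVar("FloatT", bound=np.floating)

def double_new(x: FloatT) -> FloatT:
    return x + x
```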
But what if that isn't possible? For instance, you might have a function whose floating-point return type must follow the bit size of its complex argument.
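A sketch of the kind of signature this refers to (the original snippet isn't shown here; `f` and the type-variable name are illustrative):

```python
from typing import TypeVar

import numpy as np
import numpy.typing as npt

NBitT = TypeVar("NBitT", bound=npt.NBitBase)

# Maps complex64 -> float32, complex128 -> float64, and so on
def f(x: np.complexfloating[NBitT, NBitT]) -> np.floating[NBitT]:
    return x.real
```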
In that case, you can rewrite it using `typing.overload`:
```python
import numpy as np
from typing import overload

@overload
def f(x: np.complex64) -> np.float32: ...
@overload
def f(x: np.complex128) -> np.float64: ...
@overload
def f(x: np.clongdouble) -> np.longdouble: ...
```
And even though it has become more verbose, it requires less mental effort to interpret.
## Extended precision removals
The following non-existent scalar types have been removed (#209):

- `int128` and `int256`
- `uint128` and `uint256`
- `float80` and `float256`
- `complex160` and `complex512`
These types will not be defined on any supported platform.
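Since these names never existed at runtime, their absence is easy to confirm:

```python
import numpy as np

removed = ("int128", "int256", "uint128", "uint256",
           "float80", "float256", "complex160", "complex512")

# None of these names exist in the numpy namespace at runtime
missing = all(not hasattr(np, name) for name in removed)
```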
## Aliases of `[c]longdouble`
The platform-dependent `float96` and `float128` types are equivalent aliases of `longdouble` (#397), and their complex analogues, `complex192` and `complex256`, alias `clongdouble` (#391). This was done in order to minimize the expected amount of "but it works on my machine".
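Where a platform defines these names at all, they are bound to the very same type objects; a guarded runtime check (guarded, since the names are platform-dependent):

```python
import numpy as np

pairs = (("float96", np.longdouble), ("float128", np.longdouble),
         ("complex192", np.clongdouble), ("complex256", np.clongdouble))

# Each alias, if present on this platform, is literally the [c]longdouble type
aliases_ok = all(getattr(np, name, expected) is expected for name, expected in pairs)
```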
## Return types of `[c]longdouble.item()` and `.tolist()`
In NumPy, `longdouble` and `clongdouble` aren't annotated as concrete subclasses of `[complex]floating`, but as aliases. A consequence of this is that their `item` and `tolist` methods had to return the same types as those of all other `floating` and `complexfloating` types, i.e. `float` and `complex`. But this is incorrect for `longdouble` and `clongdouble`:
```python
>>> import numpy as np
>>> np.longdouble(1).item()
np.longdouble('1.0')
>>> np.clongdouble(1).item()
np.clongdouble('1+0j')
```
In NumType, the stubs correctly reflect this runtime behaviour.
## Removed `number.__floordiv__`
The abstract `numpy.number` type represents a scalar that's either an integer, a float, or a complex number. But the builtin "floordiv" operator, `//`, is only supported for integer and floating-point scalars; complex NumPy scalars will raise an error. In NumPy, type-checkers will nevertheless allow you to write the following type-unsafe code:
```python
import numpy as np

def half(a: np.number) -> np.number:
    return a // 2

half(np.complex128(1j))  # accepted
```
In NumType's `numpy-stubs`, the `numpy.number.__[r]floordiv__` methods don't exist. This means that if you have `numtype` installed, your type-checker will report `a // 2` as an error.
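A runtime demonstration of why the methods were removed (reusing the `half` function from above):

```python
import numpy as np

def half(a: np.number) -> np.number:
    return a // 2  # with numtype installed, type-checkers flag this line

# Integer and floating scalars floor-divide fine...
assert half(np.int64(5)) == 2
assert half(np.float64(5.0)) == 2.0

# ...but a complex scalar raises at runtime, which the stubs now reflect
try:
    half(np.complex128(1j))
    raised = False
except TypeError:
    raised = True
```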
## Mypy plugin
NumType does not support the numpy mypy plugin. The reasons for this are explained in the NumPy 2.3.0 deprecation notes.