NumPy QuadDType#
A cross-platform 128-bit (quadruple precision) floating-point data type for NumPy.
NumPy QuadDType provides IEEE 754 quadruple-precision (binary128) floating-point arithmetic as a first-class NumPy dtype, enabling high-precision numerical computations that go beyond the standard 64-bit double precision.
Key Features#
128-bit floating point with ~34 significant decimal digits of precision.
Works seamlessly with NumPy arrays, ufuncs, and broadcasting.
Vectorization-friendly design that can leverage SIMD acceleration where supported.
Full suite of math functions: trigonometric, exponential, logarithmic, and more.
Choice between the SLEEF (default) and native longdouble backends.
Full support for Python’s free-threading (GIL-free) mode.
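The "~34 decimal digits" figure follows directly from the binary128 format: its significand holds 113 bits (112 stored plus one implicit leading bit), and each binary digit carries log10(2) ≈ 0.301 decimal digits. A quick sanity check in plain Python:

```python
import math

# IEEE 754 binary128 stores a 113-bit significand
# (112 explicit bits plus one implicit leading bit).
SIGNIFICAND_BITS = 113

# Decimal digits of precision = significand bits * log10(2).
decimal_digits = SIGNIFICAND_BITS * math.log10(2)

print(f"{decimal_digits:.2f}")  # ≈ 34.02 decimal digits
```

The same arithmetic gives ~15.95 digits for float64's 53-bit significand, which is where the familiar "15-16 digits" figure comes from.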
Quick Start#
Installation#
pip install numpy-quaddtype
Or with conda-forge:
conda install numpy_quaddtype
Or with mamba:
mamba install numpy_quaddtype
Basic Usage#
import numpy as np
from numpy_quaddtype import QuadPrecision, QuadPrecDType
# Create a quad-precision scalar
x = QuadPrecision("3.14159265358979323846264338327950288")
# Create a quad-precision array
arr = np.array([1, 2, 3], dtype=QuadPrecDType())
# Use NumPy functions
result = np.sin(arr)
print(result)
Why Quad Precision?#
Standard double precision (float64) provides approximately 15-16 significant decimal digits. While sufficient for most applications, some scenarios require higher precision:
Numerical Analysis: Ill-conditioned problems, iterative algorithms
Scientific Computing: Astronomy, physics simulations requiring extreme accuracy
Financial Calculations: High-precision arithmetic for regulatory compliance
Validation: Checking accuracy of lower-precision implementations
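To see the float64 ceiling concretely: double precision guarantees 15 decimal digits (`sys.float_info.dig`), and its machine epsilon is about 2.22e-16, the smallest representable gap above 1.0. Any perturbation well below that simply vanishes:

```python
import sys

# float64 guarantees 15 decimal digits and has machine epsilon
# ~2.22e-16: the gap between 1.0 and the next representable double.
print(sys.float_info.dig)      # 15
print(sys.float_info.epsilon)  # 2.220446049250313e-16

# Adding anything smaller than epsilon/2 to 1.0 rounds back to 1.0:
x = 1.0 + 1e-17
print(x == 1.0)  # True: the increment is lost in double precision
```

A quadruple-precision type, with its ~34-digit significand, preserves perturbations down to roughly one part in 10^34.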
Additionally, NumPy’s existing np.longdouble (also exposed as np.float128, an alias for np.longdouble where available) suffers from cross-platform inconsistency: it is 64-bit on Windows and macOS, 80-bit extended precision on Linux x86, and varies on other architectures. NumPy QuadDType solves this by providing true IEEE 754 quadruple precision (binary128) consistently across all platforms.
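One way to observe this inconsistency on your own machine is to inspect np.longdouble with np.finfo. The mantissa width and storage size it reports differ by platform, whereas a true binary128 type always has a 113-bit significand:

```python
import numpy as np

# float64 is identical everywhere: 52 stored mantissa bits, 8 bytes.
print(np.finfo(np.float64).nmant, np.dtype(np.float64).itemsize)

# np.longdouble varies by platform: nmant is 52 where long double is
# plain 64-bit (e.g. Windows, macOS ARM) and 63 on Linux x86 (80-bit
# extended precision); its storage size varies as well.
ld = np.finfo(np.longdouble)
print(ld.nmant, np.dtype(np.longdouble).itemsize)
```

Running this on two different platforms generally prints different values for the second line, which is exactly the portability problem QuadDType avoids.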
For more details on the motivation and technical implementation, see the Quansight Labs blog post.