numpy.polynomial.chebyshev.chebfit
polynomial.chebyshev.chebfit(x, y, deg, rcond=None, full=False, w=None)
Least squares fit of Chebyshev series to data.
Return the coefficients of a Chebyshev series of degree deg that is the least squares fit to the data values y given at points x. If y is 1-D the returned coefficients will also be 1-D. If y is 2-D multiple fits are done, one for each column of y, and the resulting coefficients are stored in the corresponding columns of a 2-D return. The fitted polynomial(s) are in the form
\[p(x) = c_0 + c_1 * T_1(x) + ... + c_n * T_n(x),\]
where n is deg.
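As a minimal illustration (the sample data, noise level and degree are assumptions for this sketch, not part of the reference text), a cubic fit and its evaluation with chebval might look like:
>>> import numpy as np
>>> from numpy.polynomial import chebyshev as C
>>> rng = np.random.default_rng(0)
>>> x = np.linspace(-1, 1, 51)
>>> y = np.exp(x) + 0.01 * rng.standard_normal(x.size)   # noisy samples of a smooth target
>>> coef = C.chebfit(x, y, 3)                             # cubic fit: coefficients c_0 ... c_3
>>> coef.shape
(4,)
>>> y_fit = C.chebval(x, coef)                            # evaluate the fitted series at x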
- Parameters:
- x : array_like, shape (M,)
x-coordinates of the M sample points (x[i], y[i]).
- y : array_like, shape (M,) or (M, K)
y-coordinates of the sample points. Several data sets of sample points sharing the same x-coordinates can be fitted at once by passing in a 2D-array that contains one dataset per column.
- deg : int or 1-D array_like
Degree(s) of the fitting polynomials. If deg is a single integer, all terms up to and including the deg’th term are included in the fit. For NumPy versions >= 1.11.0 a list of integers specifying the degrees of the terms to include may be used instead (see the example after this list).
- rcond : float, optional
Relative condition number of the fit. Singular values smaller than this relative to the largest singular value will be ignored. The default value is len(x)*eps, where eps is the relative precision of the float type, about 2e-16 in most cases.
- full : bool, optional
Switch determining the nature of the return value. When it is False (the default) just the coefficients are returned; when True, diagnostic information from the singular value decomposition is also returned.
- w : array_like, shape (M,), optional
Weights. If not None, the weight w[i] applies to the unsquared residual y[i] - y_hat[i] at x[i]. Ideally the weights are chosen so that the errors of the products w[i]*y[i] all have the same variance. When using inverse-variance weighting, use w[i] = 1/sigma(y[i]). The default value is None. See the example after this list.
New in version 1.5.0.
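As a sketch of the two options noted above (the data, noise level and chosen degrees are assumptions made for this example): deg may be given as a list of term degrees to include, and w may hold inverse-variance weights.
>>> import numpy as np
>>> from numpy.polynomial import chebyshev as C
>>> rng = np.random.default_rng(1)
>>> x = np.linspace(-1, 1, 100)
>>> sigma = np.full_like(x, 0.05)                    # assumed per-point standard deviations
>>> y = 1.0 + 0.5 * x**3 + sigma * rng.standard_normal(x.size)
>>> coef = C.chebfit(x, y, deg=[0, 1, 3])            # fit only the T_0, T_1 and T_3 terms
>>> coef.shape                                       # length max(deg) + 1; the T_2 entry is zero
(4,)
>>> coef_w = C.chebfit(x, y, deg=3, w=1.0 / sigma)   # inverse-variance weighting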
- Returns:
- coef : ndarray, shape (deg + 1,) or (deg + 1, K)
Chebyshev coefficients ordered from low to high. If y was 2-D, the coefficients for the data in column k of y are in column k.
- [residuals, rank, singular_values, rcond] : list
These values are only returned if full == True (see the example below this list).
- residuals – sum of squared residuals of the least squares fit
- rank – the numerical rank of the scaled Vandermonde matrix
- singular_values – singular values of the scaled Vandermonde matrix
- rcond – value of rcond.
For more details, see numpy.linalg.lstsq.
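For instance (the sample data and degree are assumptions for illustration), with full=True the diagnostic list is returned alongside the coefficients:
>>> import numpy as np
>>> from numpy.polynomial import chebyshev as C
>>> x = np.linspace(-1, 1, 30)
>>> y = np.cos(np.pi * x)
>>> coef, (residuals, rank, singular_values, rcond) = C.chebfit(x, y, 6, full=True)
>>> rank                    # full rank: one value per column of the Vandermonde matrix
7
>>> singular_values.shape
(7,)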
- Warns:
- RankWarning
The rank of the coefficient matrix in the least-squares fit is deficient. The warning is only raised if full == False. The warnings can be turned off by
>>> import warnings
>>> warnings.simplefilter('ignore', np.exceptions.RankWarning)
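A scoped alternative (the undersized data set below is an assumption chosen only to provoke a rank-deficient fit) is to apply the filter inside warnings.catch_warnings so the suppression does not leak into the rest of the program:
>>> import warnings
>>> import numpy as np
>>> from numpy.polynomial import chebyshev as C
>>> x = np.linspace(-1, 1, 5)
>>> y = x**2
>>> with warnings.catch_warnings():
...     warnings.simplefilter('ignore', np.exceptions.RankWarning)
...     coef = C.chebfit(x, y, 10)    # 11 coefficients from 5 points: rank deficient
>>> coef.shape
(11,)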
See also
numpy.polynomial.polynomial.polyfit
numpy.polynomial.legendre.legfit
numpy.polynomial.laguerre.lagfit
numpy.polynomial.hermite.hermfit
numpy.polynomial.hermite_e.hermefit
chebval
Evaluates a Chebyshev series.
chebvander
Vandermonde matrix of Chebyshev series.
chebweight
Chebyshev weight function.
numpy.linalg.lstsq
Computes a least-squares fit from the matrix.
scipy.interpolate.UnivariateSpline
Computes spline fits.
Notes
The solution is the coefficients of the Chebyshev series p that minimizes the sum of the weighted squared errors
\[E = \sum_j w_j^2 * |y_j - p(x_j)|^2,\]
where \(w_j\) are the weights. This problem is solved by setting up the (typically) overdetermined matrix equation
\[V(x) * c = w * y,\]
where V is the weighted pseudo Vandermonde matrix of x, c are the coefficients to be solved for, w are the weights, and y are the observed values. This equation is then solved using the singular value decomposition of V.
If some of the singular values of V are so small that they are neglected, then a RankWarning will be issued. This means that the coefficient values may be poorly determined. Using a lower order fit will usually get rid of the warning. The rcond parameter can also be set to a value smaller than its default, but the resulting fit may be spurious and have large contributions from roundoff error.
Fits using Chebyshev series are usually better conditioned than fits using power series, but much can depend on the distribution of the sample points and the smoothness of the data. If the quality of the fit is inadequate, splines may be a good alternative.
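As a rough sketch of the system described above (assumed sample data; not the exact internal code path, since chebfit also rescales the Vandermonde columns before solving), the weighted equation can be reproduced with chebvander and numpy.linalg.lstsq:
>>> import numpy as np
>>> from numpy.polynomial import chebyshev as C
>>> x = np.linspace(-1, 1, 40)
>>> y = np.sin(2 * x)
>>> w = np.full_like(x, 1.0)            # illustrative (uniform) weights
>>> deg = 5
>>> V = C.chebvander(x, deg)            # pseudo Vandermonde matrix, shape (40, deg + 1)
>>> lhs = V * w[:, np.newaxis]          # weight each row, i.e. each residual
>>> rhs = w * y
>>> c, resid, rank, sv = np.linalg.lstsq(lhs, rhs, rcond=None)
>>> np.allclose(c, C.chebfit(x, y, deg, w=w))
True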