Codes
sysidentpy base
Base classes for NARX estimator.
class sysidentpy.base.GenerateRegressors
Polynomial NARMAX model.
Methods
regressor_space(self, non_degree, xlag, …)
Create the code representation of the regressors.
regressor_space(self, non_degree, xlag, ylag, n_inputs)
Create the code representation of the regressors.
This function generates a codification of all possible regressors given the maximum lags of the input and output. It is used to write the final model terms in a readable form, e.g., [1001] -> y(k-1). This code format is based on a dissertation from UFMG; see the reference below.
- Parameters
non_degree (int) – The desired maximum nonlinearity degree.
ylag (int) – The maximum lag of output regressors.
xlag (int) – The maximum lag of input regressors.
n_inputs (int) – The number of inputs of the system.
- Returns
max_lag (int) – The maximum lag of the model, which can be used by other functions.
regressor_code (ndarray of int) – Matrix codification of all possible regressors.
Examples
The codification is defined as:
>>> 100n = y(k-n)
>>> 200n = u(k-n)
>>> [100n 100n] = y(k-n)y(k-n)
>>> [200n 200n] = u(k-n)u(k-n)
References
- [1] Master Thesis: Barbosa, Alípio Monteiro. "Técnicas de otimização bi-objetivo para a determinação da estrutura de modelos NARX." (2010). <https://repositorio.ufmg.br/bitstream/1843/BUOS-8EXJV3/1/7m.pdf>
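The sketch below illustrates this codification with plain NumPy and itertools; it is an illustration of the encoding described above, not the library's internal implementation (the helper name regressor_codes is hypothetical).
from itertools import combinations_with_replacement
import numpy as np

def regressor_codes(non_degree, ylag, xlag):
    # linear codes: 0 is the constant term, 100n is y(k-n), 200n is u(k-n)
    linear = [0]
    linear += [1000 + lag for lag in range(1, ylag + 1)]
    linear += [2000 + lag for lag in range(1, xlag + 1)]
    # each row lists the codes multiplied together to form one regressor
    return np.array(list(combinations_with_replacement(linear, non_degree)))

print(regressor_codes(non_degree=2, ylag=1, xlag=1))
# a row such as [1001 2001] stands for y(k-1)u(k-1)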
class sysidentpy.base.InformationMatrix
Class with methods for preprocessing the data columns.
Methods
build_information_matrix(self, X, y, xlag, …)
Build the information matrix.
initial_lagged_matrix(self, X, y, xlag, ylag)
Build a lagged matrix containing each lag for each column.
shift_column(self, col_to_shift, lag)
Shift values based on a lag.
shift_column(self, col_to_shift, lag)
Shift values based on a lag.
- Parameters
col_to_shift (array-like of shape = n_samples) – The samples of the input or output.
lag (int) – The respective lag of the regressor.
- Returns
tmp_column – The shifted array of the input or output.
- Return type
array-like of shape = n_samples
Examples
>>> y = [1, 2, 3, 4, 5]
>>> shift_column(y, 1)
[0, 1, 2, 3, 4]
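A minimal NumPy sketch of the shift illustrated above, assuming the first lag positions are filled with zeros (this mirrors the example output, not necessarily the library's exact code):
import numpy as np

def shift_column(col_to_shift, lag):
    col = np.asarray(col_to_shift, dtype=float).ravel()
    tmp_column = np.zeros_like(col)
    tmp_column[lag:] = col[:col.size - lag]  # delay by `lag` samples
    return tmp_column

print(shift_column([1, 2, 3, 4, 5], 1))  # [0. 1. 2. 3. 4.]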
initial_lagged_matrix(self, X, y, xlag, ylag)
Build a lagged matrix containing each lag for each column.
- Parameters
X (array-like) – Input data used in the training phase.
y (array-like) – Target data used in the training phase.
xlag (int) – The maximum lag of input regressors.
ylag (int) – The maximum lag of output regressors.
- Returns
lagged_data – The lagged matrix built with respect to each lag and column.
- Return type
ndarray of floats
Examples
Let X and y be the input and output values, each of shape Nx1. If the chosen lags are 2 for both input and output, the initial lagged matrix is formed by y[k-1], y[k-2], x[k-1], and x[k-2].
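A short sketch of that example, assuming zero-padding of the initial samples and the column order y[k-1], y[k-2], x[k-1], x[k-2] (illustrative only):
import numpy as np

def shift(col, lag):
    # delay by `lag` samples, zero-padding the start
    out = np.zeros_like(col, dtype=float)
    out[lag:] = col[:col.size - lag]
    return out

x = np.arange(1.0, 7.0)  # hypothetical input of shape (6,)
y = 0.5 * x              # hypothetical output of shape (6,)
lagged_data = np.column_stack([shift(y, 1), shift(y, 2), shift(x, 1), shift(x, 2)])
print(lagged_data.shape)  # (6, 4)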
build_information_matrix(self, X, y, xlag, ylag, non_degree)
Build the information matrix.
Each column of the information matrix represents a candidate regressor. The set of candidate regressors is determined by the xlag, ylag, and non_degree values entered by the user.
- Parameters
X (array-like) – Input data used in the training phase.
y (array-like) – Target data used in the training phase.
xlag (int) – The maximum lag of input regressors.
ylag (int) – The maximum lag of output regressors.
non_degree (int) – The desired maximum nonlinearity degree.
- Returns
lagged_data – The information matrix built with respect to each lag and column.
- Return type
ndarray of floats
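The sketch below conveys the general idea: start from a constant column plus the lagged columns, then take element-wise products of every combination of columns up to non_degree. It is an illustration, not the library's code.
from itertools import combinations_with_replacement
import numpy as np

def information_matrix(lagged_data, non_degree):
    n_samples = lagged_data.shape[0]
    base = np.column_stack([np.ones(n_samples), lagged_data])
    columns = []
    for combo in combinations_with_replacement(range(base.shape[1]), non_degree):
        # product of the chosen columns gives one candidate regressor
        columns.append(np.prod(base[:, combo], axis=1))
    return np.column_stack(columns)

# 4 lagged columns and non_degree=2 yield 15 candidate regressors
psi = information_matrix(np.random.rand(10, 4), non_degree=2)
print(psi.shape)  # (10, 15)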
sysidentpy main
sysidentpy residues
class sysidentpy.residues.residues_correlation.ResiduesAnalysis
Bases: object
Residues analysis for Polynomial NARX models.
Methods
plot_result(self, y, yhat, e_acf, xe_ccf[, …])
Plot the free run simulation and the residues analysis.
residuals(self, X, y, yhat)
Perform the residual analysis of the output to validate the model.
residuals(self, X, y, yhat)
Perform the residual analysis of the output to validate the model.
- Parameters
y (array-like of shape = n_samples) – The target data used in the identification process.
yhat (array-like of shape = n_samples) – The prediction values of the identification process.
X (ndarray of floats) – The input data.
- Returns
output_autocorr (ndarray of floats) – 1st column: normalized autocorrelation of the residuals. 2nd/3rd columns: upper and lower limits of a 95% confidence interval.
output_crosscorr (ndarray of floats) – 1st column: correlation between the residuals and the input. 2nd/3rd columns: upper and lower limits of a 95% confidence interval.
References
- [1] Wikipedia entry on the Autocorrelation
Examples
>>> y = [3, -0.5, 2, 7]
>>> autocorr(y)
[62.25 11.5 2.5 21. ]
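A minimal sketch of the kind of quantities returned here, using the usual +/- 1.96/sqrt(n) bounds for a 95% confidence interval (illustrative; not the library's implementation):
import numpy as np

rng = np.random.default_rng(0)
e = rng.standard_normal(200)  # hypothetical residuals y - yhat

e0 = e - e.mean()
acf = np.correlate(e0, e0, mode="full")[e.size - 1:]
acf = acf / acf[0]  # normalized so that acf[0] == 1

upper = 1.96 / np.sqrt(e.size)   # 95% confidence limits for an
lower = -upper                   # uncorrelated (white) sequence
output_autocorr = np.column_stack([acf, np.full_like(acf, upper), np.full_like(acf, lower)])
print(output_autocorr[:3])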
_normalized_correlation(self, signal1, signal2)
Compute the normalized correlation between two signals.
- Parameters
signal1 (array-like of shape = n_samples) – The first signal.
signal2 (array-like of shape = n_samples) – The second signal.
- Returns
ruy – The normalized cross correlation between the two signals.
- Return type
ndarray of floats
plot_result(self, y, yhat, e_acf, xe_ccf, figsize=(10, 8), n=100)
Plot the free run simulation and the residues analysis.
- Parameters
y (array-like of shape = n_samples) – The target data used in the identification process.
yhat (array-like of shape = n_samples) – The prediction values of the identification process.
e_acf (ndarray of floats) – 1st column: normalized autocorrelation of the residuals. 2nd/3rd columns: upper and lower limits of a 95% confidence interval.
xe_ccf (ndarray of floats) – 1st column: correlation between the residuals and the input. 2nd/3rd columns: upper and lower limits of a 95% confidence interval.
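A hedged matplotlib sketch of a comparable figure (free-run output versus prediction on top, residual autocorrelation with its confidence band below); the library's own plot_result may differ in layout and style.
import matplotlib.pyplot as plt
import numpy as np

n = 100
y = np.sin(np.linspace(0, 8, n))        # hypothetical measured output
yhat = y + 0.05 * np.random.randn(n)    # hypothetical free-run prediction
e = y - yhat
acf = np.correlate(e - e.mean(), e - e.mean(), mode="full")[n - 1:]
acf = acf / acf[0]

fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(10, 8))
ax1.plot(y, label="y")
ax1.plot(yhat, "--", label="yhat")
ax1.legend()
ax2.stem(acf[:30])
ax2.axhline(1.96 / np.sqrt(n), color="r", linestyle="--")
ax2.axhline(-1.96 / np.sqrt(n), color="r", linestyle="--")
ax2.set_title("Residual autocorrelation")
plt.show()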
sysidentpy metrics
Common metrics to assess performance on NARX models.
sysidentpy.metrics._regression.forecast_error(y, y_predicted)
Calculate the forecast error in a regression model.
- Parameters
y (array-like of shape = number_of_outputs) – The target values.
y_predicted (array-like of shape = number_of_outputs) – Target values predicted by the model.
- Returns
loss – The difference between the true target values and the predicted values.
- Return type
ndarray of floats
References
- [1] Wikipedia entry on the Forecast error
Examples
>>> y = [3, -0.5, 2, 7]
>>> y_predicted = [2.5, 0.0, 2, 8]
>>> forecast_error(y, y_predicted)
[0.5, -0.5, 0, -1]
sysidentpy.metrics._regression.mean_forecast_error(y, y_predicted)
Calculate the mean forecast error of a regression model.
- Parameters
y (array-like of shape = number_of_outputs) – The target values.
y_predicted (array-like of shape = number_of_outputs) – Target values predicted by the model.
- Returns
loss – The mean of the differences between the true target values and the predicted values.
- Return type
float
References
- [1] Wikipedia entry on the Forecast error
Examples
>>> y = [3, -0.5, 2, 7]
>>> y_predicted = [2.5, 0.0, 2, 8]
>>> mean_forecast_error(y, y_predicted)
-0.25
sysidentpy.metrics._regression.mean_squared_error(y, y_predicted)
Calculate the Mean Squared Error.
- Parameters
y (array-like of shape = number_of_outputs) – The target values.
y_predicted (array-like of shape = number_of_outputs) – Target values predicted by the model.
- Returns
loss – MSE is non-negative; a value of 0.0 means the model outputs exactly match the true target values.
- Return type
float
References
- [1] Wikipedia entry on the Mean Squared Error
Examples
>>> y = [3, -0.5, 2, 7]
>>> y_predicted = [2.5, 0.0, 2, 8]
>>> mean_squared_error(y, y_predicted)
0.375
sysidentpy.metrics._regression.root_mean_squared_error(y, y_predicted)
Calculate the Root Mean Squared Error.
- Parameters
y (array-like of shape = number_of_outputs) – The target values.
y_predicted (array-like of shape = number_of_outputs) – Target values predicted by the model.
- Returns
loss – RMSE is non-negative; a value of 0.0 means the model outputs exactly match the true target values.
- Return type
float
References
- [1] Wikipedia entry on the Root Mean Squared Error <https://en.wikipedia.org/wiki/Root-mean-square_deviation>
Examples
>>> y = [3, -0.5, 2, 7]
>>> y_predicted = [2.5, 0.0, 2, 8]
>>> root_mean_squared_error(y, y_predicted)
0.612
sysidentpy.metrics._regression.normalized_root_mean_squared_error(y, y_predicted)
Calculate the normalized Root Mean Squared Error.
- Parameters
y (array-like of shape = number_of_outputs) – The target values.
y_predicted (array-like of shape = number_of_outputs) – Target values predicted by the model.
- Returns
loss – nRMSE is non-negative; a value of 0.0 means the model outputs exactly match the true target values.
- Return type
float
References
- [1] Wikipedia entry on the normalized Root Mean Squared Error <https://en.wikipedia.org/wiki/Root-mean-square_deviation>
Examples
>>> y = [3, -0.5, 2, 7]
>>> y_predicted = [2.5, 0.0, 2, 8]
>>> normalized_root_mean_squared_error(y, y_predicted)
0.081
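The example above is consistent with normalizing the RMSE by the range of the target values; the NumPy sketch below shows that interpretation (an inference from the example, not a statement of the library's exact formula).
import numpy as np

y = np.array([3, -0.5, 2, 7])
y_predicted = np.array([2.5, 0.0, 2, 8])
rmse = np.sqrt(np.mean((y - y_predicted) ** 2))
nrmse = rmse / (y.max() - y.min())
print(nrmse)  # about 0.08, in line with the 0.081 quoted above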
sysidentpy.metrics._regression.root_relative_squared_error(y, y_predicted)
Calculate the Root Relative Squared Error.
- Parameters
y (array-like of shape = number_of_outputs) – The target values.
y_predicted (array-like of shape = number_of_outputs) – Target values predicted by the model.
- Returns
loss – RRSE is non-negative; a value of 0.0 means the model outputs exactly match the true target values.
- Return type
float
Examples
>>> y = [3, -0.5, 2, 7]
>>> y_predicted = [2.5, 0.0, 2, 8]
>>> root_relative_squared_error(y, y_predicted)
0.206
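The example output is consistent with the computation sketched below with NumPy (inferred from the example; the library's exact code may differ).
import numpy as np

y = np.array([3, -0.5, 2, 7])
y_predicted = np.array([2.5, 0.0, 2, 8])
numerator = np.sum((y_predicted - y) ** 2)
denominator = np.sum((y_predicted - np.mean(y_predicted)) ** 2)
print(round(np.sqrt(numerator / denominator), 3))  # 0.206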
sysidentpy.metrics._regression.mean_absolute_error(y, y_predicted)
Calculate the Mean Absolute Error.
- Parameters
y (array-like of shape = number_of_outputs) – The target values.
y_predicted (array-like of shape = number_of_outputs) – Target values predicted by the model.
- Returns
loss – MAE is non-negative; a value of 0.0 means the model outputs exactly match the true target values.
- Return type
float or ndarray of floats
References
- [1] Wikipedia entry on the Mean absolute error
Examples
>>> y = [3, -0.5, 2, 7]
>>> y_predicted = [2.5, 0.0, 2, 8]
>>> mean_absolute_error(y, y_predicted)
0.5
sysidentpy.metrics._regression.mean_squared_log_error(y, y_predicted)
Calculate the Mean Squared Logarithmic Error.
- Parameters
y (array-like of shape = number_of_outputs) – The target values.
y_predicted (array-like of shape = number_of_outputs) – Target values predicted by the model.
- Returns
loss – MSLE is non-negative; a value of 0.0 means the model outputs exactly match the true target values.
- Return type
float
Examples
>>> y = [3, 5, 2.5, 7]
>>> y_predicted = [2.5, 5, 4, 8]
>>> mean_squared_log_error(y, y_predicted)
0.039
sysidentpy.metrics._regression.median_absolute_error(y, y_predicted)
Calculate the Median Absolute Error.
- Parameters
y (array-like of shape = number_of_outputs) – The target values.
y_predicted (array-like of shape = number_of_outputs) – Target values predicted by the model.
- Returns
loss – MdAE is non-negative; a value of 0.0 means the model outputs exactly match the true target values.
- Return type
float
References
- [1] Wikipedia entry on the Median absolute deviation
Examples
>>> y = [3, -0.5, 2, 7]
>>> y_predicted = [2.5, 0.0, 2, 8]
>>> median_absolute_error(y, y_predicted)
0.5
sysidentpy.metrics._regression.explained_variance_score(y, y_predicted)
Calculate the Explained Variance Score.
- Parameters
y (array-like of shape = number_of_outputs) – The target values.
y_predicted (array-like of shape = number_of_outputs) – Target values predicted by the model.
- Returns
loss – The best possible EVS is 1.0, meaning the model outputs exactly match the true target values; lower values mean worse results.
- Return type
float
References
- [1] Wikipedia entry on the Explained Variance
Examples
>>> y = [3, -0.5, 2, 7]
>>> y_predicted = [2.5, 0.0, 2, 8]
>>> explained_variance_score(y, y_predicted)
0.957
sysidentpy.metrics._regression.r2_score(y, y_predicted)
Calculate the R2 score.
- Parameters
y (array-like of shape = number_of_outputs) – The target values.
y_predicted (array-like of shape = number_of_outputs) – Target values predicted by the model.
- Returns
loss – R2 can be positive or negative. A value of 1.0 means the model outputs exactly match the true target values; lower values mean worse results.
- Return type
float
Notes
This is not a symmetric function.
References
- [1] Wikipedia entry on the Coefficient of determination <https://en.wikipedia.org/wiki/Coefficient_of_determination>
Examples
>>> y = [3, -0.5, 2, 7]
>>> y_predicted = [2.5, 0.0, 2, 8]
>>> r2_score(y, y_predicted)
0.948
sysidentpy.metrics._regression.symmetric_mean_absolute_percentage_error(y, y_predicted)
Calculate the SMAPE score.
- Parameters
y (array-like of shape = number_of_outputs) – The target values.
y_predicted (array-like of shape = number_of_outputs) – Target values predicted by the model.
- Returns
loss – SMAPE is non-negative and is expressed as a percentage.
- Return type
float
Notes
Despite its name, SMAPE is not truly symmetric: over-forecasts and under-forecasts are not treated equally.
References
- [1] Wikipedia entry on the Symmetric mean absolute percentage error <https://en.wikipedia.org/wiki/Symmetric_mean_absolute_percentage_error>
Examples
>>> y = [3, -0.5, 2, 7]
>>> y_predicted = [2.5, 0.0, 2, 8]
>>> symmetric_mean_absolute_percentage_error(y, y_predicted)
57.87
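A NumPy sketch of the usual SMAPE formula, which reproduces the example value above up to rounding (my reading of the metric, not necessarily the exact library code):
import numpy as np

y = np.array([3, -0.5, 2, 7])
y_predicted = np.array([2.5, 0.0, 2, 8])
smape = 100 * np.mean(np.abs(y_predicted - y) / ((np.abs(y) + np.abs(y_predicted)) / 2))
print(smape)  # about 57.88, matching the 57.87 quoted above up to rounding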
sysidentpy estimators
Least squares methods for parameter estimation.
class sysidentpy.parameter_estimation.estimators.Estimators(aux_lag=1, lam=0.98, delta=0.01, offset_covariance=0.2, mu=0.01, eps=2.220446049250313e-16, gama=0.2, weight=0.02)
Ordinary Least Squares for linear parameter estimation.
Methods
affine_least_mean_squares(self, psi, y)
Estimate the model parameters using the Affine Least Mean Squares.
least_mean_squares(self, psi, y)
Estimate the model parameters using the Least Mean Squares filter.
least_mean_squares_fourth(self, psi, y)
Parameter estimation using the LMS Fourth filter.
least_mean_squares_leaky(self, psi, y)
Parameter estimation using the Leaky LMS filter.
least_mean_squares_mixed_norm(self, psi, y)
Parameter estimation using the Mixed-norm LMS filter.
least_mean_squares_normalized_leaky(self, psi, y)
Parameter estimation using the Normalized Leaky LMS filter.
least_mean_squares_normalized_sign_error(self, psi, y)
Parameter estimation using the Normalized Sign-Error LMS filter.
least_mean_squares_normalized_sign_regressor(self, psi, y)
Parameter estimation using the Normalized Sign-Regressor LMS filter.
least_mean_squares_normalized_sign_sign(self, psi, y)
Parameter estimation using the Normalized Sign-Sign LMS filter.
least_mean_squares_sign_error(self, psi, y)
Parameter estimation using the Sign-Error Least Mean Squares filter.
least_mean_squares_sign_regressor(self, psi, y)
Parameter estimation using the Sign-Regressor LMS filter.
least_mean_squares_sign_sign(self, psi, y)
Parameter estimation using the Sign-Sign LMS filter.
least_squares(self, psi, y)
Estimate the model parameters using the Least Squares method.
normalized_least_mean_squares(self, psi, y)
Parameter estimation using the Normalized Least Mean Squares filter.
recursive_least_squares(self, psi, y)
Estimate the model parameters using the Recursive Least Squares method.
total_least_squares(self, psi, y)
Estimate the model parameters using the Total Least Squares method.
least_squares(self, psi, y)
Estimate the model parameters using the Least Squares method.
- Parameters
psi (ndarray of floats) – The information matrix of the model.
y_train (array-like) – The data used to train the model.
- Returns
theta – The estimated parameters of the model.
- Return type
array-like of shape = number_of_model_elements
References
- [1] Manuscript: Sorenson, H. W. (1970). Least-squares estimation: from Gauss to Kalman. IEEE Spectrum, 7(7), 63-68. <http://pzs.dstu.dp.ua/DataMining/mls/bibl/Gauss2Kalman.pdf>
- [2] Book (Portuguese): Aguirre, L. A. (2007). Introdução à identificação de sistemas: técnicas lineares e não-lineares aplicadas a sistemas reais. Editora da UFMG. 3a edição. <https://books.google.com.br/books?hl=pt-BR&lr=&id=f9IwE7Ph0fYC&oi=fnd&pg=PA2&dq=Introdu%C3%A7%C3%A3o+%C3%A0+identifica%C3%A7%C3%A3o+de+sistemas+-+T%C3%A9cnicas+lineares+e+n%C3%A3o-lineares+aplicadas+a+sistemas+reais&ots=Qiyc4VsMdt&sig=6gumj1AEWh_b0tUGR4quI5oETUA#v=onepage&q=Introdu%C3%A7%C3%A3o%20%C3%A0%20identifica%C3%A7%C3%A3o%20de%20sistemas%20-%20T%C3%A9cnicas%20lineares%20e%20n%C3%A3o-lineares%20aplicadas%20a%20sistemas%20reais&f=false>
- [3] Manuscript: Markovsky, I., & Van Huffel, S. (2007). Overview of total least-squares methods. Signal Processing, 87(10), 2283-2302. <https://eprints.soton.ac.uk/263855/1/tls_overview.pdf>
- [4] Wikipedia entry on Least Squares
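A minimal least squares sketch with NumPy (not the library's internals): theta is chosen so that psi @ theta approximates y in the least squares sense.
import numpy as np

rng = np.random.default_rng(1)
psi = rng.standard_normal((100, 3))           # hypothetical information matrix
true_theta = np.array([[0.5], [-1.2], [2.0]])
y = psi @ true_theta + 0.01 * rng.standard_normal((100, 1))

theta, *_ = np.linalg.lstsq(psi, y, rcond=None)
print(theta.ravel())  # close to [0.5, -1.2, 2.0]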
total_least_squares(self, psi, y)
Estimate the model parameters using the Total Least Squares method.
- Parameters
psi (ndarray of floats) – The information matrix of the model.
y_train (array-like) – The data used to train the model.
- Returns
theta – The estimated parameters of the model.
- Return type
array-like of shape = number_of_model_elements
References
- [1] Manuscript: Golub, G. H., & Van Loan, C. F. (1980). An analysis of the total least squares problem. SIAM Journal on Numerical Analysis, 17(6), 883-893. <https://epubs.siam.org/doi/pdf/10.1137/0717073?casa_token=218O16LygKkAAAAA:GyssnBnNEWzVg2Wvbmu5K1pj-XwkzpTSknUsddVTZfEJafpKANUstMuRDyJjIdcTgO-tFuQYb4Y>
- [2] Manuscript: Markovsky, I., & Van Huffel, S. (2007). Overview of total least-squares methods. Signal Processing, 87(10), 2283-2302. <https://eprints.soton.ac.uk/263855/1/tls_overview.pdf>
- [3] Wikipedia entry on Total Least Squares
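A compact total least squares sketch via the SVD, following the classical construction cited above; it is shown as an illustration, not as the library's exact routine.
import numpy as np

rng = np.random.default_rng(2)
psi = rng.standard_normal((100, 3))
true_theta = np.array([[0.5], [-1.2], [2.0]])
y = psi @ true_theta + 0.01 * rng.standard_normal((100, 1))

n = psi.shape[1]
_, _, vt = np.linalg.svd(np.hstack([psi, y]), full_matrices=False)
v = vt.T
theta = -v[:n, n:] / v[n, n]  # TLS solution from the last right singular vector
print(theta.ravel())          # close to [0.5, -1.2, 2.0]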
recursive_least_squares(self, psi, y)
Estimate the model parameters using the Recursive Least Squares method.
The implementation considers the forgetting factor.
- Parameters
psi (ndarray of floats) – The information matrix of the model.
y_train (array-like) – The data used to train the model.
- Returns
theta – The estimated parameters of the model.
- Return type
array-like of shape = number_of_model_elements
Notes
A more in-depth documentation of all methods for parameters estimation will be available soon. For now, please refer to the mentioned references.
References
- [1] Book (Portuguese): Aguirre, L. A. (2007). Introdução à identificação de sistemas: técnicas lineares e não-lineares aplicadas a sistemas reais. Editora da UFMG. 3a edição. <https://books.google.com.br/books?hl=pt-BR&lr=&id=f9IwE7Ph0fYC&oi=fnd&pg=PA2&dq=Introdu%C3%A7%C3%A3o+%C3%A0+identifica%C3%A7%C3%A3o+de+sistemas+-+T%C3%A9cnicas+lineares+e+n%C3%A3o-lineares+aplicadas+a+sistemas+reais&ots=Qiyc4VsMdt&sig=6gumj1AEWh_b0tUGR4quI5oETUA#v=onepage&q=Introdu%C3%A7%C3%A3o%20%C3%A0%20identifica%C3%A7%C3%A3o%20de%20sistemas%20-%20T%C3%A9cnicas%20lineares%20e%20n%C3%A3o-lineares%20aplicadas%20a%20sistemas%20reais&f=false>
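A bare-bones recursive least squares sketch with forgetting factor lam and initial covariance scaled by delta (an illustration of the general RLS recursion, not sysidentpy's code):
import numpy as np

def recursive_least_squares(psi, y, lam=0.98, delta=0.01):
    n_theta = psi.shape[1]
    theta = np.zeros((n_theta, 1))
    P = np.eye(n_theta) / delta                    # large initial covariance
    for k in range(psi.shape[0]):
        phi = psi[k, :].reshape(-1, 1)
        gain = P @ phi / (lam + phi.T @ P @ phi)   # update gain
        theta = theta + gain * (y[k] - phi.T @ theta)
        P = (P - gain @ phi.T @ P) / lam           # forgetting factor update
    return theta

rng = np.random.default_rng(3)
psi = rng.standard_normal((200, 2))
y = psi @ np.array([[1.0], [-0.5]]) + 0.01 * rng.standard_normal((200, 1))
print(recursive_least_squares(psi, y).ravel())  # close to [1.0, -0.5]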
affine_least_mean_squares(self, psi, y)
Estimate the model parameters using the Affine Least Mean Squares.
- Parameters
psi (ndarray of floats) – The information matrix of the model.
y_train (array-like) – The data used to train the model.
- Returns
theta – The estimated parameters of the model.
- Return type
array-like of shape = number_of_model_elements
Notes
A more in-depth documentation of all methods for parameters estimation will be available soon. For now, please refer to the mentioned references.
References
- [1] Book: Poularikas, A. D. (2017). Adaptive filtering: Fundamentals of least mean squares with MATLAB®. CRC Press. <https://books.google.com.br/books?hl=pt-BR&lr=&id=OJPSBQAAQBAJ&oi=fnd&pg=PP1&dq=adaptive+filtering+fundamentals+of+least+mean+squares+with+matlab&ots=dMNzB_2erC&sig=7l0VvIm9-GwUDgj0xuy1m0c0Gdo#v=onepage&q=adaptive%20filtering%20fundamentals%20of%20least%20mean%20squares%20with%20matlab&f=false>
least_mean_squares(self, psi, y)
Estimate the model parameters using the Least Mean Squares filter.
- Parameters
psi (ndarray of floats) – The information matrix of the model.
y_train (array-like) – The data used to train the model.
- Returns
theta – The estimated parameters of the model.
- Return type
array-like of shape = number_of_model_elements
Notes
A more in-depth documentation of all methods for parameters estimation will be available soon. For now, please refer to the mentioned references.
References
- [1] Book: Haykin, S., & Widrow, B. (Eds.). (2003). Least-mean-square adaptive filters (Vol. 31). John Wiley & Sons. <https://books.google.com.br/books?hl=pt-BR&lr=&id=U8X3mJtawUkC&oi=fnd&pg=PR9&dq=%22least+mean+square%22&ots=Bzp42ZklVe&sig=ZilhP9bYuuagpi30hrJk53sWj_8&redir_esc=y#v=onepage&q=%22least%20mean%20square%22&f=false>
- [2] Dissertation (Portuguese): Zipf, J. G. F. (2011). Classificação, análise estatística e novas estratégias de algoritmos LMS de passo variável. <https://repositorio.ufsc.br/bitstream/handle/123456789/94953/296734.pdf?sequence=1>
- [3] Wikipedia entry on Least Mean Squares
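A textbook LMS update sketch with step size mu (illustrative only; the library's implementation and defaults may differ):
import numpy as np

def least_mean_squares(psi, y, mu=0.01):
    theta = np.zeros((psi.shape[1], 1))
    for k in range(psi.shape[0]):
        phi = psi[k, :].reshape(-1, 1)
        error = y[k] - phi.T @ theta       # a priori output error
        theta = theta + mu * phi * error   # stochastic-gradient update
    return theta

rng = np.random.default_rng(4)
psi = rng.standard_normal((500, 2))
y = psi @ np.array([[0.8], [0.3]]) + 0.01 * rng.standard_normal((500, 1))
print(least_mean_squares(psi, y).ravel())  # approaches [0.8, 0.3]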
least_mean_squares_sign_error(self, psi, y)
Parameter estimation using the Sign-Error Least Mean Squares filter.
The sign-error LMS algorithm uses the sign of the error vector to change the filter coefficients.
- Parameters
psi (ndarray of floats) – The information matrix of the model.
y_train (array-like) – The data used to train the model.
- Returns
theta – The estimated parameters of the model.
- Return type
array-like of shape = number_of_model_elements
Notes
A more in-depth documentation of all methods for parameters estimation will be available soon. For now, please refer to the mentioned references.
References
- [1] Book: Hayes, M. H. (2009). Statistical digital signal processing and modeling. John Wiley & Sons.
- [2] Dissertation (Portuguese): Zipf, J. G. F. (2011). Classificação, análise estatística e novas estratégias de algoritmos LMS de passo variável. <https://repositorio.ufsc.br/bitstream/handle/123456789/94953/296734.pdf?sequence=1>
- [3] Wikipedia entry on Least Mean Squares
normalized_least_mean_squares(self, psi, y)
Parameter estimation using the Normalized Least Mean Squares filter.
The normalization is used to avoid numerical instability when updating the estimated parameters.
- Parameters
psi (ndarray of floats) – The information matrix of the model.
y_train (array-like) – The data used to train the model.
- Returns
theta – The estimated parameters of the model.
- Return type
array-like of shape = number_of_model_elements
Notes
A more in-depth documentation of all methods for parameters estimation will be available soon. For now, please refer to the mentioned references.
References
- [1] Book: Hayes, M. H. (2009). Statistical digital signal processing and modeling. John Wiley & Sons.
- [2] Dissertation (Portuguese): Zipf, J. G. F. (2011). Classificação, análise estatística e novas estratégias de algoritmos LMS de passo variável. <https://repositorio.ufsc.br/bitstream/handle/123456789/94953/296734.pdf?sequence=1>
- [3] Wikipedia entry on Least Mean Squares
least_mean_squares_normalized_sign_error(self, psi, y)
Parameter estimation using the Normalized Sign-Error LMS filter.
The normalization is used to avoid numerical instability when updating the estimated parameters, and the sign of the error vector is used to change the filter coefficients.
- Parameters
psi (ndarray of floats) – The information matrix of the model.
y_train (array-like) – The data used to train the model.
- Returns
theta – The estimated parameters of the model.
- Return type
array-like of shape = number_of_model_elements
Notes
A more in-depth documentation of all methods for parameters estimation will be available soon. For now, please refer to the mentioned references.
References
- [1] Book: Hayes, M. H. (2009). Statistical digital signal processing and modeling. John Wiley & Sons.
- [2] Dissertation (Portuguese): Zipf, J. G. F. (2011). Classificação, análise estatística e novas estratégias de algoritmos LMS de passo variável. <https://repositorio.ufsc.br/bitstream/handle/123456789/94953/296734.pdf?sequence=1>
- [3] Wikipedia entry on Least Mean Squares
least_mean_squares_sign_regressor(self, psi, y)
Parameter estimation using the Sign-Regressor LMS filter.
The sign-regressor LMS algorithm uses the sign of the information matrix to change the filter coefficients.
- Parameters
psi (ndarray of floats) – The information matrix of the model.
y_train (array-like) – The data used to train the model.
- Returns
theta – The estimated parameters of the model.
- Return type
array-like of shape = number_of_model_elements
Notes
A more in-depth documentation of all methods for parameters estimation will be available soon. For now, please refer to the mentioned references.
References
- [1] Book: Hayes, M. H. (2009). Statistical digital signal processing and modeling. John Wiley & Sons.
- [2] Dissertation (Portuguese): Zipf, J. G. F. (2011). Classificação, análise estatística e novas estratégias de algoritmos LMS de passo variável. <https://repositorio.ufsc.br/bitstream/handle/123456789/94953/296734.pdf?sequence=1>
- [3] Wikipedia entry on Least Mean Squares
least_mean_squares_normalized_sign_regressor(self, psi, y)
Parameter estimation using the Normalized Sign-Regressor LMS filter.
The normalization is used to avoid numerical instability when updating the estimated parameters, and the sign of the information matrix is used to change the filter coefficients.
- Parameters
psi (ndarray of floats) – The information matrix of the model.
y_train (array-like) – The data used to train the model.
- Returns
theta – The estimated parameters of the model.
- Return type
array-like of shape = number_of_model_elements
Notes
A more in-depth documentation of all methods for parameters estimation will be available soon. For now, please refer to the mentioned references.
References
- [1] Book: Hayes, M. H. (2009). Statistical digital signal processing and modeling. John Wiley & Sons.
- [2] Dissertation (Portuguese): Zipf, J. G. F. (2011). Classificação, análise estatística e novas estratégias de algoritmos LMS de passo variável. <https://repositorio.ufsc.br/bitstream/handle/123456789/94953/296734.pdf?sequence=1>
- [3] Wikipedia entry on Least Mean Squares
least_mean_squares_sign_sign(self, psi, y)
Parameter estimation using the Sign-Sign LMS filter.
The sign-sign LMS algorithm uses both the sign of the information matrix and the sign of the error vector to change the filter coefficients.
- Parameters
psi (ndarray of floats) – The information matrix of the model.
y_train (array-like) – The data used to train the model.
- Returns
theta – The estimated parameters of the model.
- Return type
array-like of shape = number_of_model_elements
Notes
A more in-depth documentation of all methods for parameters estimation will be available soon. For now, please refer to the mentioned references.
References
- [1] Book: Hayes, M. H. (2009). Statistical digital signal processing and modeling. John Wiley & Sons.
- [2] Dissertation (Portuguese): Zipf, J. G. F. (2011). Classificação, análise estatística e novas estratégias de algoritmos LMS de passo variável. <https://repositorio.ufsc.br/bitstream/handle/123456789/94953/296734.pdf?sequence=1>
- [3] Wikipedia entry on Least Mean Squares
least_mean_squares_normalized_sign_sign(self, psi, y)
Parameter estimation using the Normalized Sign-Sign LMS filter.
The normalization is used to avoid numerical instability when updating the estimated parameters, and both the sign of the information matrix and the sign of the error vector are used to change the filter coefficients.
- Parameters
psi (ndarray of floats) – The information matrix of the model.
y_train (array-like) – The data used to train the model.
- Returns
theta – The estimated parameters of the model.
- Return type
array-like of shape = number_of_model_elements
Notes
A more in-depth documentation of all methods for parameters estimation will be available soon. For now, please refer to the mentioned references.
References
- [1] Book: Hayes, M. H. (2009). Statistical digital signal processing and modeling. John Wiley & Sons.
- [2] Dissertation (Portuguese): Zipf, J. G. F. (2011). Classificação, análise estatística e novas estratégias de algoritmos LMS de passo variável. <https://repositorio.ufsc.br/bitstream/handle/123456789/94953/296734.pdf?sequence=1>
- [3] Wikipedia entry on Least Mean Squares
least_mean_squares_normalized_leaky(self, psi, y)
Parameter estimation using the Normalized Leaky LMS filter.
When the leakage factor, gama, is set to 0, there is no leakage in the estimation process.
- Parameters
psi (ndarray of floats) – The information matrix of the model.
y_train (array-like) – The data used to train the model.
- Returns
theta – The estimated parameters of the model.
- Return type
array-like of shape = number_of_model_elements
Notes
A more in-depth documentation of all methods for parameters estimation will be available soon. For now, please refer to the mentioned references.
References
- [1] Book: Hayes, M. H. (2009). Statistical digital signal processing and modeling. John Wiley & Sons.
- [2] Dissertation (Portuguese): Zipf, J. G. F. (2011). Classificação, análise estatística e novas estratégias de algoritmos LMS de passo variável. <https://repositorio.ufsc.br/bitstream/handle/123456789/94953/296734.pdf?sequence=1>
- [3] Wikipedia entry on Least Mean Squares
least_mean_squares_leaky(self, psi, y)
Parameter estimation using the Leaky LMS filter.
When the leakage factor, gama, is set to 0, there is no leakage in the estimation process.
- Parameters
psi (ndarray of floats) – The information matrix of the model.
y_train (array-like) – The data used to train the model.
- Returns
theta – The estimated parameters of the model.
- Return type
array-like of shape = number_of_model_elements
Notes
A more in-depth documentation of all methods for parameters estimation will be available soon. For now, please refer to the mentioned references.
References
- [1] Book: Hayes, M. H. (2009). Statistical digital signal processing and modeling. John Wiley & Sons.
- [2] Dissertation (Portuguese): Zipf, J. G. F. (2011). Classificação, análise estatística e novas estratégias de algoritmos LMS de passo variável. <https://repositorio.ufsc.br/bitstream/handle/123456789/94953/296734.pdf?sequence=1>
- [3] Wikipedia entry on Least Mean Squares
least_mean_squares_fourth(self, psi, y)
Parameter estimation using the LMS Fourth filter.
When the leakage factor, gama, is set to 0, there is no leakage in the estimation process.
- Parameters
psi (ndarray of floats) – The information matrix of the model.
y_train (array-like) – The data used to train the model.
- Returns
theta – The estimated parameters of the model.
- Return type
array-like of shape = number_of_model_elements
Notes
A more in-depth documentation of all methods for parameters estimation will be available soon. For now, please refer to the mentioned references.
References
- [1] Book: Hayes, M. H. (2009). Statistical digital signal processing and modeling. John Wiley & Sons.
- [2] Dissertation (Portuguese): Zipf, J. G. F. (2011). Classificação, análise estatística e novas estratégias de algoritmos LMS de passo variável. <https://repositorio.ufsc.br/bitstream/handle/123456789/94953/296734.pdf?sequence=1>
- [3] Manuscript: Gui, G., Mehbodniya, A., & Adachi, F. (2013). Least mean square/fourth algorithm with application to sparse channel estimation. arXiv preprint arXiv:1304.3911. <https://arxiv.org/pdf/1304.3911.pdf>
- [4] Manuscript: Nascimento, V. H., & Bermudez, J. C. M. (2005, March). When is the least-mean fourth algorithm mean-square stable? In Proceedings (ICASSP'05), IEEE International Conference on Acoustics, Speech, and Signal Processing, 2005 (Vol. 4, pp. iv-341). IEEE. <http://www.lps.usp.br/vitor/artigos/icassp05.pdf>
- [5] Wikipedia entry on Least Mean Squares
least_mean_squares_mixed_norm(self, psi, y)
Parameter estimation using the Mixed-norm LMS filter.
The weight factor controls the proportions of the error norms and offers an extra degree of freedom within the adaptation.
- Parameters
psi (ndarray of floats) – The information matrix of the model.
y_train (array-like) – The data used to train the model.
- Returns
theta – The estimated parameters of the model.
- Return type
array-like of shape = number_of_model_elements
Notes
A more in-depth documentation of all methods for parameters estimation will be available soon. For now, please refer to the mentioned references.
References
- [1] Chambers, J. A., Tanrikulu, O., & Constantinides, A. G. (1994). Least mean mixed-norm adaptive filtering. Electronics Letters, 30(19), 1574-1575. <https://ieeexplore.ieee.org/document/326382>
- [2] Dissertation (Portuguese): Zipf, J. G. F. (2011). Classificação, análise estatística e novas estratégias de algoritmos LMS de passo variável. <https://repositorio.ufsc.br/bitstream/handle/123456789/94953/296734.pdf?sequence=1>
- [3] Wikipedia entry on Least Mean Squares
sysidentpy utils
Utilities for data validation.
sysidentpy.utils._check_arrays.check_infinity(X, y)
Check that X and y have no Inf samples.
If any Inf sample is found, a ValueError is raised.
- Parameters
X (ndarray of floats) – The input data.
y (ndarray of floats) – The output data.
sysidentpy.utils._check_arrays.check_nan(X, y)
Check that X and y have no NaN samples.
If any NaN sample is found, a ValueError is raised.
- Parameters
X (ndarray of floats) – The input data.
y (ndarray of floats) – The output data.
sysidentpy.utils._check_arrays.check_length(X, y)
Check that X and y have the same number of samples.
If the lengths of X and y differ, a ValueError is raised.
- Parameters
X (ndarray of floats) – The input data.
y (ndarray of floats) – The output data.
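A sketch of the kind of validation these helpers perform, combined into a single hypothetical function (the library's error messages and exact checks may differ):
import numpy as np

def validate(X, y):
    X, y = np.asarray(X), np.asarray(y)
    if len(X) != len(y):
        raise ValueError("X and y must have the same number of samples")
    if np.isnan(X).any() or np.isnan(y).any():
        raise ValueError("NaN values found in the input/output data")
    if np.isinf(X).any() or np.isinf(y).any():
        raise ValueError("Inf values found in the input/output data")

validate(np.ones((10, 1)), np.ones((10, 1)))  # passes silently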
sysidentpy generate data
Utilities for data generation
sysidentpy.utils.generate_data.get_siso_data(n=5000, colored_noise=False, sigma=0.05, train_percentage=90)
Generate simulated data for a SISO (single-input, single-output) system.
- Parameters
n (int) – The number of samples.
colored_noise (bool) – Select white noise or colored noise (autoregressive noise).
sigma (float) – The standard deviation of the random distribution to generate the noise.
train_percentage (int) – The percentage of the data to be used as train data.
- Returns
x_train, x_valid (array-like) – The input data to be used in identification and validation, respectively.
y_train, y_valid (array-like) – The output data to be used in identification and validation, respectively.
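Typical usage, following the signature and return values documented above (the return order is assumed to be x_train, x_valid, y_train, y_valid, as listed):
from sysidentpy.utils.generate_data import get_siso_data

x_train, x_valid, y_train, y_valid = get_siso_data(
    n=1000, colored_noise=False, sigma=0.05, train_percentage=90
)
print(x_train.shape, x_valid.shape)  # a 90%/10% split of the 1000 samples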
sysidentpy.utils.generate_data.get_miso_data(n=5000, colored_noise=False, sigma=0.05, train_percentage=90)
Generate simulated data for a MISO (multiple-input, single-output) system.
- Parameters
n (int) – The number of samples.
colored_noise (bool) – Select white noise or colored noise (autoregressive noise).
sigma (float) – The standard deviation of the random distribution to generate the noise.
train_percentage (int) – The percentage of the data to be used as train data.
- Returns
x_train, x_valid (array-like) – The input data to be used in identification and validation, respectively.
y_train, y_valid (array-like) – The output data to be used in identification and validation, respectively.