--- title: NumPy Evaluation keywords: fastai sidebar: home_sidebar summary: "The most important evaluation signal is the forecast error, which is the difference between the observed value $y_{\\tau}$ and the prediction $\\hat{y}_{\\tau}$, at time $\\tau$: $$e_{\\tau} = y_{\\tau}-\\hat{y}_{\\tau} \\qquad \\qquad \\tau \\in \\{t+1,\\dots,t+H \\}.$$The forecast accuracy summarizes the forecast errors in different metrics:

1. Scale-dependent errors - These metrics are on the same scale as the data.
2. Percentage errors - These metrics are unit-free, suitable for comparisons across series.
3. Scale-independent errors - These metrics measure relative improvements versus baselines.
4. Probabilistic errors - These weight absolute deviations non-symmetrically, penalizing under- or over-estimation. " description: "The most important evaluation signal is the forecast error, which is the difference between the observed value $y_{\\tau}$ and the prediction $\\hat{y}_{\\tau}$, at time $\\tau$: $$e_{\\tau} = y_{\\tau}-\\hat{y}_{\\tau} \\qquad \\qquad \\tau \\in \\{t+1,\\dots,t+H \\}.$$The forecast accuracy summarizes the forecast errors in different metrics:

1. Scale-dependent errors - These metrics are on the same scale as the data.
2. Percentage errors - These metrics are unit-free, suitable for comparisons across series.
3. Scale-independent errors - These metrics measure relative improvements versus baselines.
4. Probabilistic errors - These weight absolute deviations non-symmetrically, penalizing under- or over-estimation. " nb_path: "nbs/losses__numpy.ipynb" ---
{% raw %}
{% endraw %} {% raw %}
{% endraw %} {% raw %}
{% endraw %} {% raw %}
{% endraw %}

1. Scale-dependent Errors

Mean Absolute Error

{% raw %}

mae[source]

mae(y:ndarray, y_hat:ndarray, weights:Optional[ndarray]=None, axis:Optional[int]=None)

Calculates Mean Absolute Error (MAE) between y and y_hat. MAE measures the prediction accuracy of a forecasting method by calculating the absolute deviation between the prediction and the true value at a given time and averaging these deviations over the length of the series.

$$ \mathrm{MAE}(\mathbf{y}_{\tau}, \mathbf{\hat{y}}_{\tau}) = \frac{1}{H} \sum^{t+H}_{\tau=t+1} |y_{\tau} - \hat{y}_{\tau}| $$
Parameters
----------
y: numpy array.
    Observed values.
y_hat: numpy array
    Predicted values.
weights: numpy array, optional.
    Weights for weighted average.
axis: None or int, optional.
    Axis or axes along which to average.
    The default, axis=None, averages over all elements of the
    input array. A negative axis counts from the last to the first axis.

Returns
-------
mae: numpy array or double.
    Return the MAE along the specified axis.
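
As a quick sanity check of the definition, here is a minimal NumPy sketch on hypothetical toy data (equal weights, axis=None); the library's mae should reproduce this value:

```python
import numpy as np

y = np.array([10.0, 12.0, 14.0, 16.0])      # observed values, H = 4
y_hat = np.array([11.0, 11.5, 14.5, 15.0])  # point forecasts

# MAE: average absolute deviation between forecasts and observations.
print(np.mean(np.abs(y - y_hat)))  # 0.75
```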
{% endraw %} {% raw %}
{% endraw %} {% raw %}
{% endraw %}

Mean Squared Error

{% raw %}

mse[source]

mse(y:ndarray, y_hat:ndarray, weights:Optional[ndarray]=None, axis:Optional[int]=None)

Calculates Mean Squared Error (MSE) between y and y_hat. MSE measures the prediction accuracy of a forecasting method by calculating the squared deviation between the prediction and the true value at a given time and averaging these deviations over the length of the series.

$$ \mathrm{MSE}(\mathbf{y}_{\tau}, \mathbf{\hat{y}}_{\tau}) = \frac{1}{H} \sum^{t+H}_{\tau=t+1} (y_{\tau} - \hat{y}_{\tau})^{2} $$
Parameters
----------
y: numpy array.
    Observed values.
y_hat: numpy array.
    Predicted values.
weights: numpy array, optional.
    Weights for weighted average.
axis: None or int, optional.
    Axis or axes along which to average.
    The default, axis=None, averages over all elements of the
    input array. A negative axis counts from the last to the first axis.

Returns
-------
mse: numpy array or double.
    Return the MSE along the specified axis.
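
A minimal sketch of the formula above on the same hypothetical toy data; squaring penalizes large errors more heavily than MAE does:

```python
import numpy as np

y = np.array([10.0, 12.0, 14.0, 16.0])
y_hat = np.array([11.0, 11.5, 14.5, 15.0])

# MSE: average squared deviation; note the result is in squared units.
print(np.mean((y - y_hat) ** 2))  # 0.625
```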
{% endraw %} {% raw %}
{% endraw %} {% raw %}
{% endraw %}

Root Mean Squared Error

{% raw %}

rmse[source]

rmse(y:ndarray, y_hat:ndarray, weights:Optional[ndarray]=None, axis:Optional[int]=None)

Calculates Root Mean Squared Error (RMSE) between y and y_hat. RMSE measures the prediction accuracy of a forecasting method by calculating the squared deviation between the prediction and the observed value at a given time, averaging these deviations over the length of the series, and taking the square root. The RMSE is on the same scale as the original time series, so comparing it across series is meaningful only if they share a common scale. RMSE has a direct connection to the L2 norm.

$$ \mathrm{RMSE}(\mathbf{y}_{\tau}, \mathbf{\hat{y}}_{\tau}) = \sqrt{\frac{1}{H} \sum^{t+H}_{\tau=t+1} (y_{\tau} - \hat{y}_{\tau})^{2}} $$
Parameters
----------
y: numpy array.
    Observed values.
y_hat: numpy array.
    Predicted values.
weights: numpy array, optional.
    Weights for weighted average.
axis: None or int, optional.
    Axis or axes along which to average.
    The default, axis=None, averages over all elements of the
    input array. A negative axis counts from the last to the first axis.

Returns
-------
rmse: numpy array or double.
    Return the RMSE along the specified axis.
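
A minimal sketch on the same toy data; taking the square root of the MSE brings the error back to the scale of the series:

```python
import numpy as np

y = np.array([10.0, 12.0, 14.0, 16.0])
y_hat = np.array([11.0, 11.5, 14.5, 15.0])

# RMSE: square root of the mean squared deviation.
print(np.sqrt(np.mean((y - y_hat) ** 2)))  # ~0.7906
```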
{% endraw %} {% raw %}
{% endraw %} {% raw %}
{% endraw %}

2. Percentage Errors

Mean Absolute Percentage Error

{% raw %}

mape[source]

mape(y:ndarray, y_hat:ndarray, weights:Optional[ndarray]=None, axis:Optional[int]=None)

Calculates Mean Absolute Percentage Error (MAPE) between y and y_hat. MAPE measures the relative prediction accuracy of a forecasting method by calculating the percentage deviation between the prediction and the observed value at a given time and averaging these deviations over the length of the series. The closer an observed value is to zero, the higher the penalty MAPE assigns to the corresponding error.

$$ \mathrm{MAPE}(\mathbf{y}_{\tau}, \mathbf{\hat{y}}_{\tau}) = \frac{1}{H} \sum^{t+H}_{\tau=t+1} \frac{|y_{\tau}-\hat{y}_{\tau}|}{|y_{\tau}|} $$
Parameters
----------
y: numpy array.
    Observed values.
y_hat: numpy array.
    Predicted values.
weights: numpy array, optional.
    Weights for weighted average.
axis: None or int, optional.
    Axis or axes along which to average.
    The default, axis=None, averages over all elements of the
    input array. A negative axis counts from the last to the first axis.

Returns
-------
mape: numpy array or double.
    Return the MAPE along the specified axis.
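
A minimal sketch of the definition on toy data; the division by |y| is what makes the metric explode (or become undefined) near zero targets:

```python
import numpy as np

y = np.array([10.0, 12.0, 14.0, 16.0])
y_hat = np.array([11.0, 11.5, 14.5, 15.0])

# MAPE: absolute errors scaled by the absolute observed values.
print(np.mean(np.abs(y - y_hat) / np.abs(y)))  # ~0.0600
```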
{% endraw %} {% raw %}
{% endraw %} {% raw %}
{% endraw %}

Symmetric Mean Absolute Percentage Error

{% raw %}

smape[source]

smape(y:ndarray, y_hat:ndarray, weights:Optional[ndarray]=None, axis:Optional[int]=None)

Calculates Symmetric Mean Absolute Percentage Error (SMAPE) between y and y_hat. SMAPE measures the relative prediction accuracy of a forecasting method by calculating the deviation between the prediction and the observed value, scaled by the sum of their absolute values at a given time, and averaging these deviations over the length of the series. This bounds SMAPE between 0% and 200%, which is desirable compared to the normal MAPE, which may be undefined when the target is zero.

$$ \mathrm{SMAPE}_{2}(\mathbf{y}_{\tau}, \mathbf{\hat{y}}_{\tau}) = \frac{1}{H} \sum^{t+H}_{\tau=t+1} \frac{2\,|y_{\tau}-\hat{y}_{\tau}|}{|y_{\tau}|+|\hat{y}_{\tau}|} $$
Parameters
----------
y: numpy array.
    Observed values.
y_hat: numpy array.
    Predicted values.
weights: numpy array, optional.
    Weights for weighted average.
axis: None or int, optional.
    Axis or axes along which to average.
    The default, axis=None, averages over all elements of the
    input array. A negative axis counts from the last to the first axis.

Returns
-------
smape: numpy array or double.
    Return the SMAPE along the specified axis.
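
A minimal sketch of the factor-2 definition above on toy data; by the triangle inequality each term lies in [0, 2], so the average stays between 0% and 200%:

```python
import numpy as np

y = np.array([10.0, 12.0, 14.0, 16.0])
y_hat = np.array([11.0, 11.5, 14.5, 15.0])

# SMAPE: errors scaled by the sum of absolute actuals and forecasts.
print(np.mean(2 * np.abs(y - y_hat) / (np.abs(y) + np.abs(y_hat))))  # ~0.0593
```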
{% endraw %} {% raw %}
{% endraw %}

3. Scale-independent Errors

Mean Absolute Scaled Error

{% raw %}

mase[source]

mase(y:ndarray, y_hat:ndarray, y_train:ndarray, seasonality:int, weights:Optional[ndarray]=None, axis:Optional[int]=None)

Calculates the Mean Absolute Scaled Error (MASE) between y and y_hat. MASE measures the relative prediction accuracy of a forecasting method by comparing the mean absolute errors of the prediction against the mean absolute errors of the seasonal naive model. The MASE is one of the components of the Overall Weighted Average (OWA) used in the M4 Competition.

$$ \mathrm{MASE}(\mathbf{y}_{\tau}, \mathbf{\hat{y}}_{\tau}, \mathbf{\hat{y}}^{season}_{\tau}) = \frac{1}{H} \sum^{t+H}_{\tau=t+1} \frac{|y_{\tau}-\hat{y}_{\tau}|}{\mathrm{MAE}(\mathbf{y}_{\tau}, \mathbf{\hat{y}}^{season}_{\tau})} $$
Parameters
----------
y: numpy array.
    Observed values.
y_hat: numpy array.
    Predicted values.
y_train: numpy array.
    Actual insample values, used to compute the seasonal naive errors.
seasonality: int.
    Main frequency of the time series;
    Hourly 24,  Daily 7, Weekly 52,
    Monthly 12, Quarterly 4, Yearly 1.
weights: numpy array, optional.
    Weights for weighted average.
axis: None or int, optional.
    Axis or axes along which to average.
    The default, axis=None, averages over all elements of the
    input array. A negative axis counts from the last to the first axis.

Returns
-------
mase: numpy array or double.
    Return the MASE along the specified axis.

References
----------
[1] https://robjhyndman.com/papers/mase.pdf
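
A minimal sketch of the definition on toy data, with seasonality=1 so the baseline is the one-step naive forecast; the denominator is the insample MAE of the seasonal naive method:

```python
import numpy as np

y_train = np.array([10.0, 11.0, 12.0, 11.0, 13.0, 12.0, 14.0])  # insample series
y = np.array([10.0, 12.0, 14.0, 16.0])      # observed test values
y_hat = np.array([11.0, 11.5, 14.5, 15.0])  # forecasts
seasonality = 1                              # naive (random-walk) baseline

# Scale: insample MAE of the seasonal naive forecast y[t - seasonality].
scale = np.mean(np.abs(y_train[seasonality:] - y_train[:-seasonality]))
print(np.mean(np.abs(y - y_hat)) / scale)  # 0.5625
```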
{% endraw %} {% raw %}
{% endraw %} {% raw %}
{% endraw %}

Relative Mean Absolute Error

{% raw %}

rmae[source]

rmae(y:ndarray, y_hat1:ndarray, y_hat2:ndarray, weights:Optional[ndarray]=None, axis:Optional[int]=None)

Calculates Relative Mean Absolute Error (RMAE) between two sets of forecasts (from two different forecasting methods). A number smaller than one implies that the forecast in the numerator is better than the forecast in the denominator.

$$ \mathrm{RMAE}(\mathbf{y}_{\tau}, \mathbf{\hat{y}}_{\tau}, \mathbf{\hat{y}}^{base}_{\tau}) = \frac{1}{H} \sum^{t+H}_{\tau=t+1} \frac{|y_{\tau}-\hat{y}_{\tau}|}{\mathrm{MAE}(\mathbf{y}_{\tau}, \mathbf{\hat{y}}^{base}_{\tau})} $$
Parameters
----------
y: numpy array.
    Observed values.
y_hat1: numpy array.
    Predicted values of first model.
y_hat2: numpy array.
    Predicted values of baseline model.
weights: numpy array, optional.
    Weights for weighted average.
axis: None or int, optional.
    Axis or axes along which to average.
    The default, axis=None, averages over all elements of the
    input array. A negative axis counts from the last to the first axis.

Returns
-------
rmae: numpy array or double.
    Return the RMAE along the specified axis.
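
A minimal sketch on toy data with a hypothetical baseline forecast; an RMAE below one favors the first model:

```python
import numpy as np

y = np.array([10.0, 12.0, 14.0, 16.0])
y_hat1 = np.array([11.0, 11.5, 14.5, 15.0])  # candidate model
y_hat2 = np.array([10.0, 10.0, 12.0, 14.0])  # baseline model

# RMAE: ratio of the two models' MAEs; < 1 means the candidate wins.
print(np.mean(np.abs(y - y_hat1)) / np.mean(np.abs(y - y_hat2)))  # 0.5
```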
{% endraw %} {% raw %}
{% endraw %} {% raw %}
{% endraw %}

4. Probabilistic Errors

Quantile Loss

{% raw %}

quantile_loss[source]

quantile_loss(y:ndarray, y_hat:ndarray, q:float=0.5, weights:Optional[ndarray]=None, axis:Optional[int]=None)

Computes the quantile loss (QL) between y and y_hat. QL measures the deviation of a quantile forecast. By weighting the absolute deviation non-symmetrically, the loss pays more attention to either under- or over-estimation. A common value for q is 0.5, which gives the deviation from the median.

$$ \mathrm{QL}(\mathbf{y}_{\tau}, \mathbf{\hat{y}}^{(q)}_{\tau}) = \frac{1}{H} \sum^{t+H}_{\tau=t+1} \Big( (1-q)\,( \hat{y}^{(q)}_{\tau} - y_{\tau} )_{+} + q\,( y_{\tau} - \hat{y}^{(q)}_{\tau} )_{+} \Big) $$
Parameters
----------
y: numpy array.
    Observed values.
y_hat: numpy array.
    Predicted values.
q: float.
    Quantile for the predictions' comparison.
weights: numpy array, optional.
    Weights for weighted average.
axis: None or int, optional.
    Axis or axes along which to average.
    The default, axis=None, averages over all elements of the
    input array. A negative axis counts from the last to the first axis.

Returns
-------
quantile_loss: numpy array or double.
    Return the QL along the specified axis.
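
A minimal sketch of the pinball formula above on toy data; at q=0.5 the quantile loss equals half the MAE:

```python
import numpy as np

y = np.array([10.0, 12.0, 14.0, 16.0])
y_hat = np.array([11.0, 11.5, 14.5, 15.0])  # forecast of the q-th quantile
q = 0.5

delta = y - y_hat
# Pinball loss: underestimates weighted by q, overestimates by (1 - q).
print(np.mean(np.maximum(q * delta, (q - 1) * delta)))  # 0.375
```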
{% endraw %} {% raw %}
{% endraw %} {% raw %}
{% endraw %}

Multi-Quantile Loss

{% raw %}

mqloss[source]

mqloss(y:ndarray, y_hat:ndarray, quantiles:ndarray, weights:Optional[ndarray]=None, axis:Optional[int]=None)

Calculates the Multi-Quantile loss (MQL) between y and y_hat. MQL is the average of the quantile losses (QL) over a given set of quantiles, based on the absolute differences between the predicted quantiles and the observed values.

$$ \mathrm{MQL}(\mathbf{y}_{\tau}, [\mathbf{\hat{y}}^{(q_{1})}_{\tau}, ... ,\mathbf{\hat{y}}^{(q_{n})}_{\tau}]) = \frac{1}{n} \sum_{q_{i}} \mathrm{QL}(\mathbf{y}_{\tau}, \mathbf{\hat{y}}^{(q_{i})}_{\tau}) $$

In its limit, MQL measures the accuracy of a full predictive distribution $\mathbf{\hat{F}}_{\tau}$ via the continuous ranked probability score (CRPS). This can be achieved through a numerical integration technique that discretizes the quantiles and treats the CRPS integral with a left Riemann approximation, averaging over uniformly spaced quantiles.

$$ \mathrm{CRPS}(y_{\tau}, \mathbf{\hat{F}}_{\tau}) = \int^{1}_{0} \mathrm{QL}(y_{\tau}, \hat{y}^{(q)}_{\tau}) dq $$
Parameters
----------
y: numpy array.
    Observed values.
y_hat: numpy array.
    Predicted values.
quantiles: numpy array.
    Quantiles to compare against.
weights: numpy array, optional.
    Weights for weighted average.
axis: None or int, optional.
    Axis or axes along which to average.
    The default, axis=None, averages over all elements of the
    input array. A negative axis counts from the last to the first axis.

Returns
-------
mqloss: numpy array or double.
    Return the MQL along the specified axis.

References
----------
[1] https://www.jstor.org/stable/2629907
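
A minimal sketch on toy data, assuming one forecast column per quantile (the (H, n_quantiles) shape here is an assumption for illustration, not necessarily the library's convention); averaging pinball losses over quantiles gives a discrete approximation to the CRPS integral above:

```python
import numpy as np

y = np.array([10.0, 12.0, 14.0, 16.0])
quantiles = np.array([0.1, 0.5, 0.9])
y_hat = np.array([[ 9.0, 11.0, 13.0],   # hypothetical quantile forecasts,
                  [10.5, 11.5, 13.5],   # one column per quantile
                  [13.0, 14.5, 16.0],
                  [14.0, 15.0, 17.0]])

delta = y[:, None] - y_hat  # broadcast observations against each quantile
ql = np.maximum(quantiles * delta, (quantiles - 1) * delta)  # pinball per quantile
print(np.mean(ql))  # ~0.2333, averaged over horizon and quantiles
```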
{% endraw %} {% raw %}
{% endraw %} {% raw %}
{% endraw %}