---
title: Analysis
keywords: fastai
sidebar: home_sidebar
summary: "This contains fastai Learner extensions useful to perform prediction analysis."
description: "This contains fastai Learner extensions useful to perform prediction analysis."
nb_path: "nbs/052b_analysis.ipynb"
---
{% raw %}
{% endraw %} {% raw %}
{% endraw %} {% raw %}

Learner.show_probas[source]

Learner.show_probas(figsize=(6, 6), ds_idx=1, dl=None, one_batch=False, max_n=None, nrows=1, ncols=1, imsize=3, suptitle=None, sharex=False, sharey=False, squeeze=True, subplot_kw=None, gridspec_kw=None)
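
A typical call, assuming the trained `learn` object built in the example further down this page:

```python
# Plot the predicted probabilities on the validation set (ds_idx=1 by default)
learn.show_probas(figsize=(8, 4))
```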

{% endraw %} {% raw %}
{% endraw %} {% raw %}

Learner.plot_confusion_matrix[source]

Learner.plot_confusion_matrix(ds_idx=1, dl=None, thr=0.5, normalize=False, title='Confusion matrix', cmap='Blues', norm_dec=2, figsize=(6, 6), title_fontsize=16, fontsize=12, plot_txt=True, **kwargs)

Plot the confusion matrix, with title and using cmap.
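
For example, assuming the trained `learn` object built further down this page:

```python
# Normalized confusion matrix on the validation set
learn.plot_confusion_matrix(normalize=True, figsize=(6, 6))
```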

{% endraw %} {% raw %}
{% endraw %}

## Permutation importance

We've also introduced 2 methods to help you better understand how important certain features or certain steps are for your model. Both methods use permutation importance.

⚠️ The permutation feature or step importance is defined as the decrease in a model score when a single feature or step value is randomly shuffled.

So if you are using accuracy (higher is better), the most important features or steps will be those with a lower value on the chart (since randomly shuffling them reduces performance).

The opposite occurs for metrics like mean squared error (lower is better). In this case, the most important features or steps will be those with a higher value on the chart.
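
To make the mechanism concrete, here is a minimal NumPy sketch of the idea for a multivariate time series array of shape (samples, features, steps). The `score_fn` callable is a hypothetical stand-in for the model's metric; this is not tsai's actual implementation:

```python
import numpy as np

def permutation_importance(score_fn, X, y, random_state=23):
    # score_fn(X, y) -> float is a hypothetical metric callable (e.g. accuracy)
    rng = np.random.default_rng(random_state)
    baseline = score_fn(X, y)                  # score with intact data
    changes = {}
    for i in range(X.shape[1]):                # loop over features
        X_perm = X.copy()
        rng.shuffle(X_perm[:, i])              # shuffle one feature's values across samples
        changes[f'var_{i}'] = baseline - score_fn(X_perm, y)  # drop in the metric
    return baseline, changes
```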

There are 2 issues with step importance:

  • there may be many steps, and the analysis could take a very long time
  • steps will likely have a high autocorrelation

For those reasons, we've introduced the n_steps argument to group steps. This way you'll be able to identify which part of the time series is the most important, as shown in the sketch below.
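
For instance, in the example later on this page the dataloaders keep the last 30 steps (21 to 50) of the 51-step NATOPS series, so n_steps=5 produces 6 groups of contiguous steps that are shuffled together. A sketch of the grouping logic (illustrative only, not tsai's exact code):

```python
# Group steps 21..50 into windows of n_steps contiguous steps (illustrative)
n_steps, start, end = 5, 21, 51
groups = [list(range(i, min(i + n_steps, end))) for i in range(start, end, n_steps)]
print(groups[0], groups[-1])  # [21, 22, 23, 24, 25] [46, 47, 48, 49, 50]
```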

Feature importance has been adapted from https://www.kaggle.com/cdeotte/lstm-feature-importance by Chris Deotte (Kaggle GrandMaster).

{% raw %}

Learner.feature_importance[source]

Learner.feature_importance(X=None, y=None, partial_n:(int, float)=None, method:str='permutation', feature_names:list=None, sel_classes:(str, list)=None, key_metric_idx:int=0, show_chart:bool=True, figsize:tuple=(10, 5), title:str=None, return_df:bool=True, save_df_path:Path=None, random_state:int=23, verbose:bool=True)

Calculates feature importance as the drop in the model's validation loss or metric when a feature value is randomly shuffled

| | Type | Default | Details |
|---|---|---|---|
| X | NoneType | None | array-like object containing the time series. If None, all data in the validation set will be used. |
| y | NoneType | None | array-like object containing the targets. If None, all targets in the validation set will be used. |
| partial_n | (int, float) | None | # (int) or % (float) of samples used to measure feature importance. If None, all data will be used. |
| method | str | permutation | Method used to invalidate features. Use 'permutation' for shuffling or 'ablation' for setting values to np.nan. |
| feature_names | list | None | Optional list of feature names that will be displayed if available. Otherwise var_0, var_1, etc. |
| sel_classes | (str, list) | None | Classes for which the analysis will be made. |
| key_metric_idx | int | 0 | Optional position of the metric used. If None or no metric is available, the loss will be used. |
| show_chart | bool | True | Flag to indicate if a chart showing permutation feature importance will be plotted. |
| figsize | tuple | (10, 5) | Size of the chart. |
| title | str | None | Optional string that will be used as the chart title. If None, 'Permutation Feature Importance' will be used. |
| return_df | bool | True | Flag to indicate if the dataframe with feature importance will be returned. |
| save_df_path | Path | None | Path where the dataframe containing the permutation feature importance results will be saved. |
| random_state | int | 23 | Optional int that controls the shuffling applied to the data. |
| verbose | bool | True | Flag that controls verbosity. |
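
For example (the feature names and save path below are illustrative placeholders):

```python
from pathlib import Path

# Ablation (values set to np.nan) instead of permutation, with custom feature labels,
# saving the resulting dataframe to disk. Names and path are illustrative.
df = learn.feature_importance(method='ablation',
                              feature_names=[f'sensor_{i}' for i in range(5)],
                              save_df_path=Path('feature_importance'))
```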
{% endraw %} {% raw %}
{% endraw %} {% raw %}

Learner.step_importance[source]

Learner.step_importance(X=None, y=None, partial_n:(int, float)=None, method:str='permutation', step_names:list=None, sel_classes:(str, list)=None, n_steps:int=1, key_metric_idx:int=0, show_chart:bool=True, figsize:tuple=(10, 5), title:str=None, xlabel=None, return_df:bool=True, save_df_path:Path=None, random_state:int=23, verbose:bool=True)

Calculates step importance as the drop in the model's validation loss or metric when the values of one or more steps are randomly shuffled

| | Type | Default | Details |
|---|---|---|---|
| X | NoneType | None | array-like object containing the time series. If None, all data in the validation set will be used. |
| y | NoneType | None | array-like object containing the targets. If None, all targets in the validation set will be used. |
| partial_n | (int, float) | None | # (int) or % (float) of samples used to measure step importance. If None, all data will be used. |
| method | str | permutation | Method used to invalidate steps. Use 'permutation' for shuffling or 'ablation' for setting values to np.nan. |
| step_names | list | None | Optional list of step names that will be displayed if available. Otherwise 0, 1, 2, etc. |
| sel_classes | (str, list) | None | Classes for which the analysis will be made. |
| n_steps | int | 1 | # of steps that will be analyzed at a time. |
| key_metric_idx | int | 0 | Optional position of the metric used. If None or no metric is available, the loss will be used. |
| show_chart | bool | True | Flag to indicate if a chart showing permutation step importance will be plotted. |
| figsize | tuple | (10, 5) | Size of the chart. |
| title | str | None | Optional string that will be used as the chart title. If None, 'Permutation Step Importance' will be used. |
| xlabel | NoneType | None | Optional string that will be used as the chart xlabel. If None, 'steps' will be used. |
| return_df | bool | True | Flag to indicate if the dataframe with step importance will be returned. |
| save_df_path | Path | None | Path where the dataframe containing the permutation step importance results will be saved. |
| random_state | int | 23 | Optional int that controls the shuffling applied to the data. |
| verbose | bool | True | Flag that controls verbosity. |
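
For example (the class labels below are illustrative; use whatever labels appear in your targets):

```python
# Analyze windows of 10 steps at a time, restricted to two classes
# (class labels are illustrative placeholders)
learn.step_importance(n_steps=10, sel_classes=['1.0', '2.0'])
```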
{% endraw %} {% raw %}
{% endraw %} {% raw %}
from tsai.data.core import TSClassification, get_ts_dls
from tsai.data.external import get_UCR_data
from tsai.data.preprocessing import TSStandardize
from tsai.learner import ts_learner
from tsai.models.FCNPlus import FCNPlus
from tsai.metrics import accuracy
dsid = 'NATOPS'
X, y, splits = get_UCR_data(dsid, split_data=False)
tfms = [None, [TSClassification()]]
batch_tfms = TSStandardize()
dls = get_ts_dls(X, y, splits=splits, sel_vars=[0, 3, 5, 8, 10], sel_steps=slice(-30, None), tfms=tfms, batch_tfms=batch_tfms)
learn = ts_learner(dls, FCNPlus, metrics=accuracy, train_metrics=True)
learn.fit_one_cycle(2)
# learn.plot_metrics()
# learn.show_probas()
# learn.plot_confusion_matrix()

| epoch | train_loss | train_accuracy | valid_loss | valid_accuracy | time |
|---|---|---|---|---|---|
| 0 | 1.885728 | 0.078125 | 1.713989 | 0.322222 | 00:01 |
| 1 | 1.687350 | 0.460938 | 1.466011 | 0.361111 | 00:01 |
{% endraw %} {% raw %}
learn.feature_importance()
X.shape: (180, 24, 51)
y.shape: (180,)
Selected metric: accuracy
Computing feature importance (permutation method)...
100.00% [6/6 00:03<00:00]
  0 feature: BASELINE             accuracy: 0.361111
  0 feature: var_0                accuracy: 0.344444
  3 feature: var_3                accuracy: 0.277778
  5 feature: var_5                accuracy: 0.344444
  8 feature: var_8                accuracy: 0.366667
 10 feature: var_10               accuracy: 0.333333

| | Feature | accuracy | accuracy_change |
|---|---|---|---|
| 0 | var_3 | 0.277778 | 0.083333 |
| 1 | var_10 | 0.333333 | 0.027778 |
| 2 | var_0 | 0.344444 | 0.016667 |
| 3 | var_5 | 0.344444 | 0.016667 |
| 4 | BASELINE | 0.361111 | -0.000000 |
| 5 | var_8 | 0.366667 | -0.005556 |
{% endraw %} {% raw %}
learn.step_importance(n_steps=5);
X.shape: (180, 24, 51)
y.shape: (180,)
Selected metric: accuracy
Computing step importance...
100.00% [7/7 00:04<00:00]
  0 step: BASELINE             accuracy: 0.361111
  1 step: 21 to 25             accuracy: 0.344444
  2 step: 26 to 30             accuracy: 0.344444
  3 step: 31 to 35             accuracy: 0.338889
  4 step: 36 to 40             accuracy: 0.283333
  5 step: 41 to 45             accuracy: 0.305556
  6 step: 46 to 50             accuracy: 0.350000

{% endraw %}

You may pass your own X and y if you want to analyze a particular group of samples:

learn.feature_importance(X=X[splits[1]], y=y[splits[1]])

If you have a large validation dataset, you may also use the partial_n argument to select either a fixed number of samples (int) or a fraction of the validation set (float):

learn.feature_importance(partial_n=.1)
learn.feature_importance(partial_n=100)
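
Since return_df=True by default, you can also capture the results for further processing (assuming the accuracy_change column shown in the output above):

```python
# Get the importance dataframe without plotting, sorted by the drop in the metric
df = learn.feature_importance(show_chart=False)
print(df.sort_values('accuracy_change', ascending=False))
```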