---
title: Analysis
keywords: fastai
sidebar: home_sidebar
summary: "This contains fastai Learner extensions useful to perform prediction analysis."
description: "This contains fastai Learner extensions useful to perform prediction analysis."
nb_path: "nbs/052b_analysis.ipynb"
---
{% raw %}
{% endraw %} {% raw %}
{% endraw %} {% raw %}

Learner.show_probas[source]

Learner.show_probas(figsize=(6, 6), ds_idx=1, dl=None, one_batch=False, max_n=None, nrows=1, ncols=1, imsize=3, suptitle=None, sharex=False, sharey=False, squeeze=True, subplot_kw=None, gridspec_kw=None)

{% endraw %} {% raw %}
{% endraw %} {% raw %}

Learner.plot_confusion_matrix[source]

Learner.plot_confusion_matrix(ds_idx=1, dl=None, thr=0.5, normalize=False, title='Confusion matrix', cmap='Blues', norm_dec=2, figsize=(6, 6), title_fontsize=16, fontsize=12, plot_txt=True, **kwargs)

Plot the confusion matrix with the given title, using the cmap colormap.
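To make the counting and `normalize` behavior concrete, here is a minimal NumPy sketch of how a confusion matrix can be built (this is an illustration, not tsai's implementation; the `confusion_matrix` helper below is hypothetical):

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes, normalize=False):
    """Count (true label, predicted label) pairs; optionally normalize each row."""
    cm = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    if normalize:
        cm = cm / cm.sum(axis=1, keepdims=True)  # each row sums to 1
    return cm

y_true = [0, 0, 1, 1, 2]
y_pred = [0, 1, 1, 1, 2]
cm = confusion_matrix(y_true, y_pred, n_classes=3)
```

With `normalize=True` each row shows the fraction of that true class assigned to each prediction, which is what the `normalize` flag above controls.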

{% endraw %} {% raw %}
{% endraw %} {% raw %}

Learner.feature_importance[source]

Learner.feature_importance(X=None, y=None, partial_n=None, feature_names=None, key_metric_idx=0, show_chart=True, save_df_path=False, random_state=23)

Calculates permutation feature importance, defined as the change in the model's validation loss or metric when the values of a single feature are randomly shuffled.

Shuffling breaks the relationship between the feature and the target, so the resulting change in the validation loss or metric indicates how much the model depends on that feature.

Args:

* `X`: array-like object containing the time series data for which importance will be measured. If `None`, all data in the validation set will be used.
* `y`: array-like object containing the targets. If `None`, all targets in the validation set will be used.
* `partial_n`: number of samples (if int) or fraction of the validation set (if float) that will be used to measure feature importance. If `None`, all data will be used.
* `feature_names` (Optional[list(str)]): list of feature names that will be displayed if available. Otherwise they will be var_0, var_1, etc.
* `key_metric_idx` (Optional[int]): integer to select the metric used in the calculation. If `None` or no metric is available, the change is calculated using the validation loss.
* `show_chart` (bool): flag indicating whether a chart showing permutation feature importance will be plotted.
* `save_df_path` (str): path where the dataframe containing the permutation feature importance results will be saved.
* `random_state` (int): controls the shuffling applied to the data. Pass an int for reproducible output across multiple function calls.
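The permutation procedure described above can be sketched in a few lines of NumPy. This is a simplified illustration, not tsai's implementation: the `permutation_importance` helper, the toy model, and the toy data below are all hypothetical.

```python
import numpy as np

def permutation_importance(predict, metric, X, y, random_state=23):
    """For each feature column, shuffle its values across samples and record
    the drop in the metric relative to the unshuffled baseline."""
    rng = np.random.default_rng(random_state)
    baseline = metric(y, predict(X))
    importances = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        rng.shuffle(Xp[:, j])  # break the feature <-> target relationship
        importances.append(baseline - metric(y, predict(Xp)))
    return np.array(importances)

# Toy model that only looks at feature 0, so feature 1 should get zero importance
accuracy = lambda y, p: (y == p).mean()
X = np.array([[0., 5.], [1., 5.], [0., 5.], [1., 5.]])
y = np.array([0, 1, 0, 1])
predict = lambda X: (X[:, 0] > 0.5).astype(int)
fi = permutation_importance(predict, accuracy, X, y)
```

A feature the model ignores produces no metric change when shuffled, while an informative feature produces a positive drop; fixing `random_state` makes the result reproducible, mirroring the `random_state` argument above.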

{% endraw %} {% raw %}
{% endraw %} {% raw %}
from fastai.metrics import accuracy
from tsai.data.core import TSClassification, get_ts_dls
from tsai.data.external import get_UCR_data
from tsai.data.preprocessing import TSRobustScale
from tsai.learner import ts_learner
from tsai.models.FCNPlus import FCNPlus
dsid = 'NATOPS'
X, y, splits = get_UCR_data(dsid, split_data=False)
tfms = [None, [TSClassification()]]
batch_tfms = TSRobustScale()
dls = get_ts_dls(X, y, splits=splits, sel_vars=[0, 3, 5, 8, 10], sel_steps=slice(-30, None), tfms=tfms, batch_tfms=batch_tfms)
learn = ts_learner(dls, FCNPlus, metrics=accuracy, train_metrics=True)
learn.fit_one_cycle(2)
learn.plot_metrics()
learn.show_probas()
learn.plot_confusion_matrix()
| epoch | train_loss | train_accuracy | valid_loss | valid_accuracy | time |
|-------|------------|----------------|------------|----------------|-------|
| 0 | 1.742494 | 0.210938 | 1.628715 | 0.261111 | 00:01 |
| 1 | 1.560428 | 0.546875 | 1.476521 | 0.350000 | 00:01 |
{% endraw %} {% raw %}
learn.feature_importance();
Selected metric: accuracy
Computing feature importance...
100.00% [24/24 00:03<00:00]
  1 feature: var_0                accuracy: 0.166667
  4 feature: var_3                accuracy: 0.177778
  6 feature: var_5                accuracy: 0.172222
  9 feature: var_8                accuracy: 0.177778
 11 feature: var_10               accuracy: 0.177778

{% endraw %} {% raw %}
learn.feature_importance(X=X[splits[1]], y=y[splits[1]]);
Selected metric: accuracy
Computing feature importance...
100.00% [24/24 00:03<00:00]
  1 feature: var_0                accuracy: 0.166667
  4 feature: var_3                accuracy: 0.177778
  6 feature: var_5                accuracy: 0.172222
  9 feature: var_8                accuracy: 0.177778
 11 feature: var_10               accuracy: 0.177778

{% endraw %} {% raw %}
learn.feature_importance(partial_n=.1);
Selected metric: accuracy
Computing feature importance...
100.00% [24/24 00:02<00:00]
  1 feature: var_0                accuracy: 0.166667
  4 feature: var_3                accuracy: 0.222222
  6 feature: var_5                accuracy: 0.166667
  9 feature: var_8                accuracy: 0.222222
 11 feature: var_10               accuracy: 0.222222

{% endraw %} {% raw %}
learn.feature_importance(partial_n=10);
Selected metric: accuracy
Computing feature importance...
100.00% [24/24 00:02<00:00]
  1 feature: var_0                accuracy: 0.200000
  4 feature: var_3                accuracy: 0.200000
  6 feature: var_5                accuracy: 0.200000
  9 feature: var_8                accuracy: 0.200000
 11 feature: var_10               accuracy: 0.200000

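The two calls above differ only in how `partial_n` is interpreted: a float selects a fraction of the validation set, an int selects an exact sample count. A hedged sketch of that resolution logic (the `resolve_partial_n` helper is hypothetical, not tsai's code; NATOPS has 180 validation samples):

```python
def resolve_partial_n(partial_n, n):
    """Map partial_n to a sample count: float -> fraction of n, int -> count."""
    if partial_n is None:
        return n                          # use the full validation set
    if isinstance(partial_n, float):
        return int(round(partial_n * n))  # e.g. .1 of 180 samples -> 18
    return partial_n                      # already an exact sample count

resolve_partial_n(.1, 180)  # fraction: 18 samples
resolve_partial_n(10, 180)  # count: 10 samples
```

Using a subset trades some stability in the importance estimates for a faster computation, which is why the `partial_n` runs above finish sooner than the full-set run.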
{% endraw %}