---
title: SparsifyCallback
keywords: fastai
sidebar: home_sidebar
summary: "Use the Sparsifier in the fastai Callback system"
description: "Use the Sparsifier in the fastai Callback system"
nb_path: "nbs/02_sparsify_callback.ipynb"
---
{% raw %}
{% endraw %} {% raw %}
 
{% endraw %} {% raw %}
{% endraw %} {% raw %}
path = untar_data(URLs.PETS)
files = get_image_files(path/"images")

# In the Oxford-IIIT Pet dataset, cat images have filenames starting with an uppercase letter
def label_func(f): return f[0].isupper()
{% endraw %} {% raw %}
dls = ImageDataLoaders.from_name_func(path, files, label_func, item_tfms=Resize(64))
{% endraw %} {% raw %}

class SparsifyCallback[source]

SparsifyCallback(end_sparsity, granularity, method, criteria, sched_func, start_sparsity=0, start_epoch=0, end_epoch=None, lth=False, rewind_epoch=0, reset_end=False, model=None, round_to=None, save_tickets=False, layer_type=Conv2d) :: Callback

Basic class handling tweaks of the training loop by changing a Learner in various events

{% endraw %} {% raw %}
{% endraw %}

The most important part of our Callback happens in before_batch. There, we first compute the current sparsity of our network according to our schedule and then prune (zero out) the parameters accordingly.
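
To make the mechanism concrete, here is a minimal, hypothetical sketch of that logic. It is not fasterai's actual source: ToySparsifyCallback and the sparsifier.prune_model call are stand-ins for illustration only, while pct_train and training are real attributes maintained by fastai during fitting.

{% raw %}
```python
from fastai.callback.core import Callback

# Hypothetical sketch of the before_batch logic, NOT fasterai's actual implementation.
# `sparsifier` stands for any object able to zero out the lowest-ranked weights.
class ToySparsifyCallback(Callback):
    def __init__(self, sparsifier, end_sparsity, sched_func):
        self.sparsifier, self.end_sparsity, self.sched_func = sparsifier, end_sparsity, sched_func

    def before_batch(self):
        if not self.training: return
        # 1) ask the schedule for the target sparsity at the current point of training
        current_sparsity = self.sched_func(0., self.end_sparsity, self.pct_train)
        # 2) prune (zero out) parameters so that the network reaches that sparsity
        self.sparsifier.prune_model(current_sparsity)
```
{% endraw %}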

{% raw %}
learn = cnn_learner(dls, resnet18, metrics=accuracy)
learn.unfreeze()
{% endraw %} {% raw %}
learn.fit_one_cycle(5)

| epoch | train_loss | valid_loss | accuracy | time  |
|-------|------------|------------|----------|-------|
| 0     | 0.666736   | 1.887411   | 0.753721 | 00:11 |
| 1     | 0.437540   | 0.276486   | 0.881597 | 00:10 |
| 2     | 0.258372   | 0.291492   | 0.878890 | 00:10 |
| 3     | 0.146018   | 0.200280   | 0.924222 | 00:10 |
| 4     | 0.075150   | 0.212859   | 0.925575 | 00:10 |

{% endraw %}

Let's now try adding some sparsity to our model.

{% raw %}
learn = cnn_learner(dls, resnet18, metrics=accuracy)
learn.unfreeze()
{% endraw %}

Compared to the Sparsifier, the SparsifyCallback requires one additional argument: the pruning schedule to follow during training, which tells the callback how much to prune at each step.

You can use any scheduling function already available in fastai or come up with your own! For more information about the pruning schedules, take a look at the Schedules section.
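
As a sketch of what a custom schedule can look like (assuming the fastai annealing signature f(start, end, pos), where pos is the fraction of training completed), here is a hypothetical schedule that stays dense for the first half of training and then ramps up linearly:

{% raw %}
```python
# Hypothetical custom schedule: no pruning during the first half of training,
# then a linear ramp from start to end sparsity over the second half.
def sched_delayed_linear(start, end, pos):
    if pos < 0.5: return start
    return start + (end - start) * (pos - 0.5) / 0.5

sp_cb = SparsifyCallback(end_sparsity=50, granularity='weight', method='local',
                         criteria=large_final, sched_func=sched_delayed_linear)
```
{% endraw %}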

{% raw %}
sp_cb = SparsifyCallback(end_sparsity=50, granularity='weight', method='local', criteria=large_final, sched_func=sched_cos)
{% endraw %} {% raw %}
learn.fit_one_cycle(5, cbs=sp_cb)
Pruning of weight until a sparsity of [50]%
Saving Weights at epoch 0

| epoch | train_loss | valid_loss | accuracy | time  |
|-------|------------|------------|----------|-------|
| 0     | 0.672893   | 0.457210   | 0.832882 | 00:10 |
| 1     | 0.406809   | 0.249998   | 0.899188 | 00:10 |
| 2     | 0.234348   | 0.461690   | 0.835589 | 00:10 |
| 3     | 0.132843   | 0.218986   | 0.920839 | 00:10 |
| 4     | 0.080099   | 0.211927   | 0.919486 | 00:10 |

Sparsity at the end of epoch 0: [4.77]%
Sparsity at the end of epoch 1: [17.27]%
Sparsity at the end of epoch 2: [32.73]%
Sparsity at the end of epoch 3: [45.23]%
Sparsity at the end of epoch 4: [50.0]%
Final Sparsity: [50.0]%
Sparsity in Conv2d 2: 50.00%
Sparsity in Conv2d 8: 50.00%
Sparsity in Conv2d 11: 50.00%
Sparsity in Conv2d 14: 50.00%
Sparsity in Conv2d 17: 50.00%
Sparsity in Conv2d 21: 50.00%
Sparsity in Conv2d 24: 50.00%
Sparsity in Conv2d 27: 50.00%
Sparsity in Conv2d 30: 50.00%
Sparsity in Conv2d 33: 50.00%
Sparsity in Conv2d 37: 50.00%
Sparsity in Conv2d 40: 50.00%
Sparsity in Conv2d 43: 50.00%
Sparsity in Conv2d 46: 50.00%
Sparsity in Conv2d 49: 50.00%
Sparsity in Conv2d 53: 50.00%
Sparsity in Conv2d 56: 50.00%
Sparsity in Conv2d 59: 50.00%
Sparsity in Conv2d 62: 50.00%
Sparsity in Conv2d 65: 50.00%
{% endraw %}
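
The per-epoch sparsities reported above follow directly from the cosine annealing used by sched_cos, evaluated at the end-of-epoch positions pos = 1/5, 2/5, ..., 1. A quick check (the cos_sched function below mirrors fastai's cosine annealing formula and is named differently only to keep the sketch self-contained):

{% raw %}
```python
import math

# Cosine annealing from `start` to `end` as a function of training progress `pos` in [0, 1]
def cos_sched(start, end, pos):
    return start + (1 + math.cos(math.pi * (1 - pos))) * (end - start) / 2

# Sparsity reached at the end of each of the 5 epochs
print([round(cos_sched(0, 50, (epoch + 1) / 5), 2) for epoch in range(5)])
# [4.77, 17.27, 32.73, 45.23, 50.0]
```
{% endraw %}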

Surprisingly, our network, which is now made of 50% zeroes, performs reasonably well compared to our plain, dense network.

The SparsifyCallback also accepts a list of sparsities, with one value per layer of layer_type to be pruned. Below, we show how to prune only the intermediate layers of ResNet-18.
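
If you would rather not write the list by hand, a sketch like the one below can derive it by counting the Conv2d layers (this assumes the 20 prunable Conv2d layers of a torchvision ResNet-18, visited in the same order as by the callback); the cells that follow use the hard-coded equivalent.

{% raw %}
```python
import torch.nn as nn
from torchvision.models import resnet18

# List the Conv2d layers of a ResNet-18 and assign 50% sparsity only to the
# intermediate ones (indices 6 to 13), leaving the others dense.
convs = [m for m in resnet18().modules() if isinstance(m, nn.Conv2d)]
sparsities = [50 if 6 <= i <= 13 else 0 for i in range(len(convs))]
print(len(convs), sparsities)
```
{% endraw %}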

{% raw %}
learn = cnn_learner(dls, resnet18, metrics=accuracy)
learn.unfreeze()
{% endraw %} {% raw %}
sparsities = [0, 0, 0, 0, 0, 0, 50, 50, 50, 50, 50, 50, 50, 50, 0, 0, 0, 0, 0, 0]
{% endraw %} {% raw %}
sp_cb = SparsifyCallback(end_sparsity=sparsities, granularity='weight', method='local', criteria=large_final, sched_func=sched_cos)
{% endraw %} {% raw %}
learn.fit_one_cycle(5, cbs=sp_cb)
Pruning of weight until a sparsity of [0, 0, 0, 0, 0, 0, 50, 50, 50, 50, 50, 50, 50, 50, 0, 0, 0, 0, 0, 0]%
Saving Weights at epoch 0

| epoch | train_loss | valid_loss | accuracy | time  |
|-------|------------|------------|----------|-------|
| 0     | 0.663173   | 0.546195   | 0.790934 | 00:10 |
| 1     | 0.403149   | 0.366974   | 0.876184 | 00:10 |
| 2     | 0.237900   | 0.250653   | 0.904601 | 00:11 |
| 3     | 0.138620   | 0.214972   | 0.924899 | 00:11 |
| 4     | 0.067375   | 0.212548   | 0.920839 | 00:10 |

Sparsity at the end of epoch 0: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 4.77, 4.77, 4.77, 4.77, 4.77, 4.77, 4.77, 4.77, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]%
Sparsity at the end of epoch 1: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 17.27, 17.27, 17.27, 17.27, 17.27, 17.27, 17.27, 17.27, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]%
Sparsity at the end of epoch 2: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 32.73, 32.73, 32.73, 32.73, 32.73, 32.73, 32.73, 32.73, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]%
Sparsity at the end of epoch 3: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 45.23, 45.23, 45.23, 45.23, 45.23, 45.23, 45.23, 45.23, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]%
Sparsity at the end of epoch 4: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 50.0, 50.0, 50.0, 50.0, 50.0, 50.0, 50.0, 50.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]%
Final Sparsity: [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 50.0, 50.0, 50.0, 50.0, 50.0, 50.0, 50.0, 50.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]%
Sparsity in Conv2d 2: 0.00%
Sparsity in Conv2d 8: 0.00%
Sparsity in Conv2d 11: 0.00%
Sparsity in Conv2d 14: 0.00%
Sparsity in Conv2d 17: 0.00%
Sparsity in Conv2d 21: 0.00%
Sparsity in Conv2d 24: 50.00%
Sparsity in Conv2d 27: 50.00%
Sparsity in Conv2d 30: 50.00%
Sparsity in Conv2d 33: 50.00%
Sparsity in Conv2d 37: 50.00%
Sparsity in Conv2d 40: 50.00%
Sparsity in Conv2d 43: 50.00%
Sparsity in Conv2d 46: 50.00%
Sparsity in Conv2d 49: 0.00%
Sparsity in Conv2d 53: 0.00%
Sparsity in Conv2d 56: 0.00%
Sparsity in Conv2d 59: 0.00%
Sparsity in Conv2d 62: 0.00%
Sparsity in Conv2d 65: 0.00%
{% endraw %}

On top of that, the SparsifyCallback can also take several optional arguments (a usage sketch follows the list):

  • start_sparsity: the sparsity that the schedule will use as a starting point (defaults to 0)
  • start_epoch: the epoch at which the schedule starts pruning (defaults to 0)
  • end_epoch: the epoch at which the schedule stops pruning (defaults to the number of training epochs passed to fit)
  • lth: whether to train according to the Lottery Ticket Hypothesis, i.e. reset the weights to their original values at each pruning step (more information in the Lottery Ticket Hypothesis section)
  • rewind_epoch: the epoch used as the reference point for the Lottery Ticket Hypothesis with Rewinding (defaults to 0)
  • reset_end: whether to reset the weights to their original values after training (the pruning masks are still applied)
  • model: pass a model, or a part of the model, if you don't want to apply pruning to the whole trained model
  • layer_type: the type of layer to which pruning is applied (defaults to nn.Conv2d)
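
As a usage sketch (the argument values are purely illustrative), a delayed-start, Lottery-Ticket-style run could be set up like this:

{% raw %}
```python
# Illustrative values only: start pruning after the first epoch and, before each
# pruning step, rewind the weights to the values saved at the rewind epoch (lth=True).
sp_cb = SparsifyCallback(end_sparsity=50, granularity='weight', method='local',
                         criteria=large_final, sched_func=sched_cos,
                         start_epoch=1, lth=True, rewind_epoch=1)
learn.fit_one_cycle(5, cbs=sp_cb)
```
{% endraw %}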

For example, we pruned the convolution layers of our model here, but we could just as well prune the Linear layers or even only the BatchNorm ones! A sketch of the Linear case is shown below.
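
Here is a sketch of pruning Linear layers instead of convolutions, and of restricting pruning to a sub-module (assuming the usual fastai split where learn.model[0] is the convolutional body):

{% raw %}
```python
import torch.nn as nn

# Prune the Linear layers (e.g. those of the fastai head) instead of the convolutions
sp_cb_linear = SparsifyCallback(end_sparsity=50, granularity='weight', method='local',
                                criteria=large_final, sched_func=sched_cos,
                                layer_type=nn.Linear)

# Only prune a part of the model, here the convolutional body
sp_cb_body = SparsifyCallback(end_sparsity=50, granularity='weight', method='local',
                              criteria=large_final, sched_func=sched_cos,
                              model=learn.model[0])
```
{% endraw %}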