---
title: Batch Norm Folding
keywords: fastai
sidebar: home_sidebar
summary: "Fold the batchnorm and the conv layers together to reduce computation"
description: "Fold the batchnorm and the conv layers together to reduce computation"
nb_path: "nbs/06_bn_folding.ipynb"
---
Batch Normalization is a technique that normalizes the input of each layer to make training faster and more stable. In practice, it is an extra layer that we generally add after the computation layer and before the non-linearity, as illustrated below.
It consists of 2 steps:

1. Normalize the batch by first subtracting its mean $\mu$, then dividing it by its standard deviation $\sigma$.
2. Scale the result by a learnable factor $\gamma$ and shift it by a learnable factor $\beta$, so that the network is not forced to work with zero-mean, unit-variance activations.
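To make the placement concrete, a typical block is a convolution followed by batch normalization and a non-linearity. The snippet below is purely illustrative (it is not part of fasterai):

```python
import torch.nn as nn

# Typical placement: computation layer -> batch norm -> non-linearity
block = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.BatchNorm2d(16),
    nn.ReLU(),
)
```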
Due to its efficiency for training neural networks, batch normalization is now widely used. But how useful is it at inference time?
Once training has ended, each batch normalization layer possesses a specific set of $\gamma$ and $\beta$, but also of $\mu$ and $\sigma$, the latter two being computed with an exponentially weighted average during training. This means that during inference, the batch normalization layer acts as a simple linear transformation of what comes out of the previous layer, often a convolution.
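We can verify this claim directly: in evaluation mode, a `BatchNorm2d` layer applies nothing more than a fixed per-channel scale and shift built from its running statistics. Here is a small sketch of that check (illustrative only, independent of fasterai):

```python
import torch
import torch.nn as nn

bn = nn.BatchNorm2d(3)
with torch.no_grad():  # give the layer non-trivial statistics and affine parameters
    bn.running_mean.uniform_(-1, 1)
    bn.running_var.uniform_(0.5, 1.5)
    bn.weight.uniform_(0.5, 1.5)
    bn.bias.uniform_(-1, 1)
bn.eval()  # inference mode: running_mean / running_var are used, not batch statistics

x = torch.randn(2, 3, 8, 8)
scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)
shift = bn.bias - bn.running_mean * scale
y_linear = x * scale[None, :, None, None] + shift[None, :, None, None]

assert torch.allclose(bn(x), y_linear, atol=1e-5)
```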
As a convolution is also a linear transformation, both operations can be merged into a single linear transformation!
This would remove some unnecessary parameters but also reduce the number of operations to be performed at inference time.
With a little bit of math, we can easily rearrange the terms of the convolution to take the batch normalization into account.
As a little reminder, the convolution operation followed by the batch normalization operation can be expressed, for an input $x$, as:
{% raw %} $$\begin{aligned} z &=W * x+b \\ \mathrm{out} &=\gamma \cdot \frac{z-\mu}{\sqrt{\sigma^{2}+\epsilon}}+\beta \end{aligned}$$ {% endraw %}
So, if we re-arrange the $W$ and $b$ of the convolution to take the parameters of the batch normalization into account, we get:
{% raw %} $$\begin{aligned} w_{\text {fold }} &=\gamma \cdot \frac{W}{\sqrt{\sigma^{2}+\epsilon}} \\ b_{\text {fold }} &=\gamma \cdot \frac{b-\mu}{\sqrt{\sigma^{2}+\epsilon}}+\beta \end{aligned}$$ {% endraw %}
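With these folded parameters, the convolution alone produces exactly the same output as the original convolution followed by batch normalization, so the batch normalization layer can simply be removed. Here is a minimal PyTorch sketch of the idea (illustrative only, this is not fasterai's actual implementation):

```python
import torch
import torch.nn as nn

# Original pair of layers, with non-trivial batch norm statistics
conv = nn.Conv2d(3, 16, kernel_size=3, padding=1, bias=True)
bn = nn.BatchNorm2d(16)
with torch.no_grad():
    bn.running_mean.uniform_(-1, 1)
    bn.running_var.uniform_(0.5, 1.5)
    bn.weight.uniform_(0.5, 1.5)
    bn.bias.uniform_(-1, 1)
conv.eval(); bn.eval()

# w_fold = gamma * W / sqrt(var + eps)
# b_fold = gamma * (b - mu) / sqrt(var + eps) + beta
scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)
folded = nn.Conv2d(3, 16, kernel_size=3, padding=1, bias=True)
with torch.no_grad():
    folded.weight.copy_(conv.weight * scale[:, None, None, None])
    folded.bias.copy_((conv.bias - bn.running_mean) * scale + bn.bias)
folded.eval()

# The folded convolution matches conv + batch norm up to floating-point error
x = torch.randn(2, 3, 32, 32)
assert torch.allclose(bn(conv(x)), folded(x), atol=1e-5)
```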
This is how to do it with fasterai!
path = untar_data(URLs.PETS)
files = get_image_files(path/"images")
def label_func(f): return f[0].isupper()  # in the Pets dataset, cat images have capitalized file names
dls = ImageDataLoaders.from_name_func(path, files, label_func, item_tfms=Resize(64))
learn = Learner(dls, resnet18(num_classes=2), metrics=accuracy)
learn.fit_one_cycle(5)
bn = BN_Folder()
new_model = bn.fold(learn.model)
The batch norm layers have been replaced by Identity layers, and the weights of the convolutions have been modified accordingly.
new_model
We can see that the new model possesses fewer parameters:
count_parameters(learn.model)
count_parameters(new_model)
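Here, `count_parameters` comes from fasterai; a minimal equivalent, assuming it simply adds up the number of elements of every parameter tensor, would be:

```python
def count_parameters(model):
    # Assumed behaviour: total number of elements across all parameter tensors
    return sum(p.numel() for p in model.parameters())
```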
But it is also faster to run!
x,y = dls.one_batch()
%%timeit
learn.model(x[0][None].cuda())
%%timeit
new_model(x[0][None].cuda())
But most importantly, it has the exact same performance as before:
new_learn = Learner(dls, new_model, metrics=accuracy)
new_learn.validate()
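Beyond the validation metric, we can also check numerically that both models produce the same outputs up to floating-point error, for instance by reusing the batch grabbed earlier (a sketch, assuming both models live on the same device):

```python
# Compare raw outputs of the original and folded models on the same batch
learn.model.eval(); new_model.eval()
with torch.no_grad():
    print(torch.allclose(learn.model(x), new_model(x), atol=1e-4))
```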