---
title: Losses
keywords: fastai
sidebar: home_sidebar
summary: "Implements custom loss functions."
description: "Implements custom loss functions."
nb_path: "nbs/05_losses.ipynb"
---
{% raw %}
{% endraw %} {% raw %}
{% endraw %}

Weighted Softmax Cross Entropy Loss

As described by Falk, Thorsten, et al. "U-Net: deep learning for cell counting, detection, and morphometry." Nature Methods 16.1 (2019): 67-70.

  • axis: the axis over which the softmax is computed; use 1 (the channel dimension) for segmentation.
  • reduction: used when calling Learner.get_preds.
  • activation: the function applied to the raw output logits of the model when calling Learner.get_preds or Learner.predict.
  • decodes: converts the model output to a format similar to the target (here, binary masks); used in Learner.predict.
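The loss itself is a per-pixel cross entropy scaled by a per-pixel weight map. This is not the library's implementation, but the idea can be sketched in plain PyTorch (the function name and signature below are illustrative):

```python
import torch
import torch.nn.functional as F

def weighted_softmax_ce(logits, targets, weights, reduction="mean"):
    """Per-pixel cross entropy scaled by a per-pixel weight map (sketch).

    logits:  (N, C, H, W) raw model outputs
    targets: (N, H, W)    integer class labels
    weights: (N, H, W)    per-pixel weights
    """
    # softmax over the channel dimension, then per-pixel negative log-likelihood
    logp = F.log_softmax(logits, dim=1)
    nll = F.nll_loss(logp, targets, reduction="none")  # shape (N, H, W)
    loss = nll * weights
    if reduction == "mean":
        return loss.mean()
    if reduction == "sum":
        return loss.sum()
    return loss
```

With weights of all ones and reduction='mean' this reduces to ordinary softmax cross entropy.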
{% raw %}

class WeightedSoftmaxCrossEntropy[source]

WeightedSoftmaxCrossEntropy(*args, axis=-1, reduction='mean') :: Module

Weighted Softmax Cross Entropy loss function

{% endraw %} {% raw %}
{% endraw %}

In a segmentation task, we want to take the softmax over the channel dimension.

{% raw %}
torch.manual_seed(0)
tst = WeightedSoftmaxCrossEntropy(axis=1)
output = TensorImage(torch.randn(4, 5, 356, 356, requires_grad=True))
targets = TensorMask(torch.ones(4, 356, 356).long())
weights = torch.randn(4, 356, 356)
loss = tst(output, targets, weights)
test_eq(loss.detach().numpy(), -0.002415925730019808)
test_eq(tst.activation(output), F.softmax(output, dim=1))
test_eq(tst.decodes(output), output.argmax(dim=1))
{% endraw %}
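In the example above the weights are random, but Falk et al. derive them from the label masks: a border-emphasis term based on the distances to the two nearest cell instances, so that background pixels between touching cells get high weight. A minimal sketch of such a weight map, assuming scipy is available (function name and defaults are illustrative, not the library's API):

```python
import numpy as np
from scipy import ndimage

def unet_weight_map(mask, w0=10.0, sigma=5.0):
    """Illustrative weight map in the spirit of Falk et al.:
    w(x) = 1 + w0 * exp(-(d1(x) + d2(x))^2 / (2 * sigma^2)) on background pixels,
    where d1, d2 are distances to the nearest and second-nearest cell instance."""
    labels, n = ndimage.label(mask)
    if n < 2:
        return np.ones(mask.shape)
    # distance from every pixel to each labeled instance
    dists = np.stack([ndimage.distance_transform_edt(labels != i)
                      for i in range(1, n + 1)], axis=0)
    dists.sort(axis=0)
    d1, d2 = dists[0], dists[1]
    border = w0 * np.exp(-((d1 + d2) ** 2) / (2 * sigma ** 2))
    # emphasize the background gaps between touching instances
    return 1.0 + border * (mask == 0)
```

The resulting array can be passed as the weights argument of the loss; in practice it is precomputed once per mask at dataset-construction time.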

Kornia Segmentation Losses Integration

Helper functions to load segmentation losses from kornia; see the kornia documentation for a detailed explanation of each loss.

{% raw %}

load_kornia_loss[source]

load_kornia_loss(loss_name, alpha=0.5, beta=0.5, gamma=2.0, reduction='mean', eps=1e-08)

Load a kornia segmentation loss

{% endraw %} {% raw %}
{% endraw %} {% raw %}
output = TensorImage(torch.randn(4, 5, 356, 356, requires_grad=True))
targets = TensorMask(torch.ones(4, 356, 356).long())
tst = load_kornia_loss("TverskyLoss", alpha=0.5, beta=0.5) # equals dice loss
loss = tst(output, targets)
tst2 = load_kornia_loss("DiceLoss")
loss2 = tst2(output, targets)
test_eq(loss.detach().numpy(), loss2.detach().numpy())

tst3 = load_kornia_loss("FocalLoss")
loss3 = tst3(output, targets)
{% endraw %}
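The first two losses agree because setting alpha = beta = 0.5 in the Tversky index recovers the Dice score: TP / (TP + 0.5·FP + 0.5·FN) = 2·TP / (2·TP + FP + FN). A minimal sketch of the index (tversky_index is an illustrative helper, not the kornia API):

```python
import torch

def tversky_index(probs, onehot, alpha=0.5, beta=0.5, eps=1e-8):
    # TI = TP / (TP + alpha * FP + beta * FN)
    # alpha penalizes false positives, beta penalizes false negatives
    tp = (probs * onehot).sum()
    fp = (probs * (1 - onehot)).sum()
    fn = ((1 - probs) * onehot).sum()
    return tp / (tp + alpha * fp + beta * fn + eps)
```

The corresponding loss is 1 - TI; skewing alpha and beta away from 0.5 trades precision against recall, and FocalLoss instead down-weights easy pixels in the cross entropy.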