losses Module

The losses module provides classes for common loss functions used in segmentation models.
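All classes below share the same calling convention: instantiate the module, then call it with predictions and ground-truth masks, as in the per-class examples that follow. A minimal usage sketch with dummy tensors (the shapes, the explicit sigmoid, and whether the loss applies its own sigmoid internally are illustrative assumptions, not guarantees of the library):

>>> import torch
>>> from farabio.utils.losses import DiceLoss
>>> outputs = torch.sigmoid(torch.randn(4, 1, 64, 64))   # predicted probabilities
>>> targets = (torch.rand(4, 1, 64, 64) > 0.5).float()   # binary ground-truth masks
>>> criterion = DiceLoss()
>>> loss = criterion(outputs, targets)                    # scalar loss tensor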

DiceLoss

class farabio.utils.losses.DiceLoss(weight=None, size_average=True)[source]

The Dice coefficient, or Dice-Sørensen coefficient.

It is a common metric for pixel segmentation that can also be modified to act as a loss function.

\[\mathrm{DSC}=\frac{2|X \cap Y|}{|X|+|Y|}\]
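In practice the sets are replaced by flattened prediction and target tensors, the intersection by an element-wise product, and a small smoothing constant is added to avoid division by zero. A minimal sketch of this "soft" Dice loss in functional form (the DiceLoss class is expected to wrap an equivalent computation, but details such as an internal sigmoid may differ):

>>> import torch
>>> def soft_dice_loss(inputs, targets, smooth=1):
...     inputs, targets = inputs.reshape(-1), targets.reshape(-1)
...     intersection = (inputs * targets).sum()
...     dice = (2. * intersection + smooth) / (inputs.sum() + targets.sum() + smooth)
...     return 1 - dice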

Examples

>>> dice_loss = DiceLoss()
>>> dice_loss(outputs, targets)
__init__(weight=None, size_average=True)[source]

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(inputs, targets, smooth=1)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool

DiceBCELoss

class farabio.utils.losses.DiceBCELoss(weight=None, size_average=True)[source]

Dice loss combined with binary cross-entropy (BCE).

This loss combines Dice loss with the standard binary cross-entropy (BCE) loss that is generally the default for segmentation models. Combining the two allows for some diversity in the loss while benefiting from the stability of BCE. The equation for BCE itself will be familiar to anyone who has studied logistic regression.

\[J(\mathbf{w})=\frac{1}{N} \sum_{n=1}^{N} H\left(p_{n}, q_{n}\right)=-\frac{1}{N} \sum_{n=1}^{N}\left[y_{n} \log \hat{y}_{n}+\left(1-y_{n}\right) \log \left(1-\hat{y}_{n}\right)\right]\]
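A sketch of how the two terms might be combined, assuming probability inputs and a plain unweighted sum (the exact weighting used by DiceBCELoss may differ):

>>> import torch.nn.functional as F
>>> def dice_bce_loss(inputs, targets, smooth=1):
...     inputs, targets = inputs.reshape(-1), targets.reshape(-1)
...     intersection = (inputs * targets).sum()
...     dice = 1 - (2. * intersection + smooth) / (inputs.sum() + targets.sum() + smooth)
...     bce = F.binary_cross_entropy(inputs, targets, reduction='mean')
...     return dice + bce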

Examples

>>> dice_bce_loss = DiceBCELoss()
>>> dice_bce_loss(outputs, targets)
__init__(weight=None, size_average=True)[source]

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(inputs, targets, smooth=1)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool

IoULoss

class farabio.utils.losses.IoULoss(weight=None, size_average=True)[source]

The IoU metric, or Jaccard Index.

It is similar to the Dice metric and is calculated as the ratio between the overlap of the positive instances in two sets and their combined (union) size.

\[J(A, B)=\frac{|A \cap B|}{|A \cup B|}=\frac{|A \cap B|}{|A|+|B|-|A \cap B|}\]
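The two measures are monotonically related, so they rank predictions identically and differ only in how strongly partial overlap is penalized; for the same pair of sets, with \(D\) the Dice coefficient and \(J\) the Jaccard index,

\[J=\frac{D}{2-D}, \qquad D=\frac{2 J}{1+J}\]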

Examples

>>> iou_loss = IoULoss()
>>> iou_loss(outputs, targets)
__init__(weight=None, size_average=True)[source]

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(inputs, targets, smooth=1)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool

FocalLoss

class farabio.utils.losses.FocalLoss(weight=None, size_average=True)[source]

Focal Loss from [RetinaNet]

Notes

\[\mathrm{FL}\left(p_{t}\right)=-\alpha_{t}\left(1-p_{t}\right)^{\gamma} \log \left(p_{t}\right)\]

where \(p_t\) is the model’s estimated probability for the true class.

It was introduced by Facebook AI Research in 2017 to combat extremely imbalanced datasets where positive cases were relatively rare.

Figure: focal_loss.png (excerpt from [amaarora]).

The loss is shaped by two hyperparameters, \(\alpha\) and \(\gamma\).

The focusing parameter \(\gamma\) smoothly adjusts the rate at which easy examples are down-weighted. When \(\gamma = 0\), focal loss is equivalent to categorical cross-entropy, and as \(\gamma\) is increased, the effect of the modulating factor is likewise increased (\(\gamma = 2\) works best in experiments).

The weighting factor \(\alpha\) balances the importance of positive and negative examples; if \(\alpha = 1\), class 1 and class 0 (in the binary case) receive the same weight.
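One common way to realize the formula on top of per-pixel BCE is to recover \(p_t\) from the BCE value and apply the modulating factor. A minimal sketch under that assumption (note that the defaults of FocalLoss.forward, alpha=0.8 and gamma=0.2, differ from the paper's recommended \(\gamma = 2\); the library's exact reduction may also differ):

>>> import torch
>>> import torch.nn.functional as F
>>> def focal_loss(inputs, targets, alpha=0.8, gamma=2.0):
...     bce = F.binary_cross_entropy(inputs, targets, reduction='none')
...     pt = torch.exp(-bce)                      # per-pixel p_t, since bce = -log(p_t)
...     return (alpha * (1 - pt) ** gamma * bce).mean()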

References

RetinaNet

https://arxiv.org/abs/1708.02002

amaarora

https://amaarora.github.io/2020/06/29/FocalLoss.html

Examples

>>> focal_loss = FocalLoss()
>>> focal_loss(outputs, targets)
__init__(weight=None, size_average=True)[source]

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(inputs, targets, alpha=0.8, gamma=0.2, smooth=1)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool

TverskyLoss

class farabio.utils.losses.TverskyLoss(weight=None, size_average=True)[source]

Tversky loss from [1]

Notes

\[S(P, G ; \alpha, \beta)=\frac{|P \cap G|}{|P \cap G|+\alpha|P \backslash G|+\beta|G \backslash P|}\]
where:
  • \(P\) and \(G\) are the predicted and ground truth binary labels.

  • \(\alpha\) and \(\beta\) control the magnitude of the penalties for FPs and FNs, respectively.

Special cases (a numeric sketch of the first case follows this list):
  • \(\alpha = \beta = 0.5\) reduces to the Dice coefficient

  • \(\alpha = \beta = 1\) yields the Tanimoto coefficient

  • \(\alpha + \beta = 1\) produces the set of \(F_\beta\) scores
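A quick numeric check of the first special case, computing the Tversky index from soft true-positive, false-positive and false-negative counts (values are illustrative only):

>>> import torch
>>> inputs = torch.tensor([0.9, 0.8, 0.2, 0.1])           # predicted probabilities
>>> targets = torch.tensor([1., 1., 0., 0.])               # binary ground truth
>>> tp = (inputs * targets).sum()
>>> fp = (inputs * (1 - targets)).sum()
>>> fn = ((1 - inputs) * targets).sum()
>>> tversky = tp / (tp + 0.5 * fp + 0.5 * fn)              # alpha = beta = 0.5
>>> dice = 2 * tp / (2 * tp + fp + fn)                     # Dice coefficient
>>> torch.allclose(tversky, dice)
True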

References

1

https://arxiv.org/abs/1706.05721

Examples

>>> tversky_loss = TverskyLoss()
>>> tversky_loss(outputs, targets)
__init__(weight=None, size_average=True)[source]

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(inputs, targets, smooth=1, alpha=0.5, beta=0.5)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool

FocalTverskyLoss

class farabio.utils.losses.FocalTverskyLoss(weight=None, size_average=True)[source]

A variant of the Tversky loss.

It adds the \(\gamma\) focusing modifier of Focal Loss to the Tversky loss, following [1].
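In common implementations the loss is obtained by raising the Tversky complement to a focusing power (the original paper [1] writes the exponent as \(1/\gamma\)); with the default \(\gamma = 1\) in forward this reduces to the plain Tversky loss. With \(\mathrm{TI}\) the Tversky index defined above:

\[\mathrm{FTL}=(1-\mathrm{TI})^{\gamma}\]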

References

1

https://arxiv.org/abs/1810.07842

Examples

>>> focal_tversky_loss = FocalTverskyLoss()
>>> focal_tversky_loss(outputs, targets)
__init__(weight=None, size_average=True)[source]

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(inputs, targets, smooth=1, alpha=0.5, beta=0.5, gamma=1)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool

LovaszHingeLoss

class farabio.utils.losses.LovaszHingeLoss(weight=None, size_average=True)[source]

The Lovász hinge loss.

A tractable surrogate for optimizing the intersection-over-union (Jaccard) measure in neural networks; the binary-segmentation variant of the Lovász-Softmax loss from [1].

References

1

https://arxiv.org/abs/1705.08790

Examples

>>> lovasz_hinge_loss = LovaszHingeLoss()
>>> lovasz_hinge_loss(outputs, targets)
__init__(weight=None, size_average=True)[source]

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(inputs, targets)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool