super_gradients.training.losses package
Submodules
super_gradients.training.losses.all_losses module
super_gradients.training.losses.ddrnet_loss module
- class super_gradients.training.losses.ddrnet_loss.DDRNetLoss(threshold: float = 0.7, ohem_percentage: float = 0.1, weights: list = [1.0, 0.4], ignore_label=255)[source]
Bases: super_gradients.training.losses.ohem_ce_loss.OhemCELoss
- forward(predictions_list: Union[list, tuple, torch.Tensor], targets: torch.Tensor)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- reduction: str
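A minimal usage sketch (shapes and the number of classes are illustrative assumptions; the two predictions match the two entries in weights, i.e. DDRNet's main and auxiliary outputs):

```python
import torch
from super_gradients.training.losses.ddrnet_loss import DDRNetLoss

criterion = DDRNetLoss(threshold=0.7, ohem_percentage=0.1, weights=[1.0, 0.4], ignore_label=255)

# Illustrative shapes: two segmentation heads, each predicting per-pixel class logits.
num_classes, h, w = 19, 64, 64
predictions = [torch.randn(2, num_classes, h, w), torch.randn(2, num_classes, h, w)]
targets = torch.randint(0, num_classes, (2, h, w))

loss = criterion(predictions, targets)
```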
super_gradients.training.losses.focal_loss module
- class super_gradients.training.losses.focal_loss.FocalLoss(loss_fcn: torch.nn.modules.loss.BCEWithLogitsLoss, gamma=1.5, alpha=0.25)[source]
Bases: torch.nn.modules.loss._Loss
Wraps a focal loss around an existing loss_fcn(), e.g. criteria = FocalLoss(nn.BCEWithLogitsLoss(), gamma=1.5)
- forward(pred, true)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- reduction: str
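A minimal sketch following the docstring's own example (tensor shapes are illustrative):

```python
import torch
import torch.nn as nn
from super_gradients.training.losses.focal_loss import FocalLoss

# Wrap an existing BCE-with-logits criterion, as the docstring suggests.
criterion = FocalLoss(nn.BCEWithLogitsLoss(), gamma=1.5, alpha=0.25)

pred = torch.randn(8, 4)                    # raw logits
true = torch.randint(0, 2, (8, 4)).float()  # binary targets
loss = criterion(pred, true)
```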
super_gradients.training.losses.label_smoothing_cross_entropy_loss module
- class super_gradients.training.losses.label_smoothing_cross_entropy_loss.LabelSmoothingCrossEntropyLoss(weight=None, ignore_index=-100, reduction='mean', smooth_eps=None, smooth_dist=None, from_logits=True)[source]
Bases: torch.nn.modules.loss.CrossEntropyLoss
CrossEntropyLoss, with the ability to receive a distribution as targets, and optional label smoothing
- forward(input, target, smooth_dist=None)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- ignore_index: int
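A minimal usage sketch (shapes and the smoothing value are illustrative assumptions):

```python
import torch
from super_gradients.training.losses.label_smoothing_cross_entropy_loss import LabelSmoothingCrossEntropyLoss

criterion = LabelSmoothingCrossEntropyLoss(smooth_eps=0.1)

logits = torch.randn(8, 10)           # [batch, num_classes]
targets = torch.randint(0, 10, (8,))  # hard labels; a [batch, num_classes] distribution is also accepted
loss = criterion(logits, targets)
```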
- super_gradients.training.losses.label_smoothing_cross_entropy_loss.cross_entropy(inputs, target, weight=None, ignore_index=-100, reduction='mean', smooth_eps=None, smooth_dist=None, from_logits=True)[source]
Cross entropy loss, with support for target distributions and label smoothing (https://arxiv.org/abs/1512.00567).
- super_gradients.training.losses.label_smoothing_cross_entropy_loss.onehot(indexes, N=None, ignore_index=None)[source]
Creates a one-hot representation of indexes with N possible entries. If N is not specified, it is inferred from the maximum index appearing. indexes is a long tensor of indexes; ignore_index entries are all-zero in the one-hot representation.
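A sketch of the described behaviour using plain torch ops rather than the helper itself (values are hypothetical):

```python
import torch

indexes = torch.tensor([0, 2, 1])  # long tensor of class indexes
N = 4                              # number of possible entries
onehot = torch.zeros(indexes.size(0), N).scatter_(1, indexes.unsqueeze(1), 1.0)
# tensor([[1., 0., 0., 0.],
#         [0., 0., 1., 0.],
#         [0., 1., 0., 0.]])
```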
super_gradients.training.losses.ohem_ce_loss module
- class super_gradients.training.losses.ohem_ce_loss.OhemCELoss(threshold: float, mining_percent: float = 0.1, ignore_lb: int = -100, num_pixels_exclude_ignored: bool = True)[source]
Bases: torch.nn.modules.loss._Loss
OhemCELoss - Online Hard Example Mining Cross Entropy Loss
- forward(logits, labels)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- reduction: str
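A sketch of the online hard example mining idea implied by the parameters, not the class's exact internals (in particular, the real class can exclude ignored pixels from the pixel count):

```python
import torch
import torch.nn.functional as F

def ohem_ce_sketch(logits, labels, threshold=0.7, mining_percent=0.1, ignore_lb=-100):
    # Per-pixel cross entropy without reduction, so pixels can be ranked by difficulty.
    pixel_losses = F.cross_entropy(logits, labels, ignore_index=ignore_lb, reduction="none").flatten()
    thresh = -torch.log(torch.tensor(threshold))  # a pixel is "hard" if p(correct class) < threshold
    n_min = int(mining_percent * pixel_losses.numel())
    hard = pixel_losses[pixel_losses > thresh]
    if hard.numel() < n_min:
        # Back off to the n_min hardest pixels when too few exceed the threshold.
        hard = pixel_losses.topk(n_min).values
    return hard.mean()
```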
super_gradients.training.losses.r_squared_loss module
- class super_gradients.training.losses.r_squared_loss.RSquaredLoss(size_average=None, reduce=None, reduction: str = 'mean')[source]
Bases: torch.nn.modules.loss._Loss
- forward(output, target)[source]
Computes the R-squared for the output and target values.
- Parameters
output – Tensor / Numpy / List, the predictions
target – Tensor / Numpy / List, the corresponding labels
- reduction: str
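A sketch of the quantity involved, assuming the standard definition of the coefficient of determination; the exact value returned as a loss (e.g. 1 - R²) is up to the implementation:

```python
import torch

def r_squared_sketch(output: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    ss_res = torch.sum((target - output) ** 2)         # residual sum of squares
    ss_tot = torch.sum((target - target.mean()) ** 2)  # total sum of squares
    return 1 - ss_res / ss_tot
```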
super_gradients.training.losses.shelfnet_ohem_loss module
- class super_gradients.training.losses.shelfnet_ohem_loss.ShelfNetOHEMLoss(threshold: float = 0.7, mining_percent: float = 0.0001, ignore_lb: int = 255)[source]
Bases: super_gradients.training.losses.ohem_ce_loss.OhemCELoss
- forward(predictions_list: list, targets)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- reduction: str
- training: bool
super_gradients.training.losses.shelfnet_semantic_encoding_loss module
- class super_gradients.training.losses.shelfnet_semantic_encoding_loss.ShelfNetSemanticEncodingLoss(se_weight=0.2, nclass=21, aux_weight=0.4, weight=None, ignore_index=-1)[source]
Bases: torch.nn.modules.loss.CrossEntropyLoss
2D Cross Entropy Loss with Auxiliary Loss
- forward(logits, labels)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- ignore_index: int
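A hedged sketch of how the weighted terms could combine, assuming a main segmentation output, a semantic-encoding (per-image class presence) output, and an auxiliary output; the names, shapes, and exact signature here are illustrative, not the class's own:

```python
import torch.nn.functional as F

def shelfnet_se_loss_sketch(main_out, se_out, aux_out, labels, se_target,
                            se_weight=0.2, aux_weight=0.4, ignore_index=-1):
    loss_main = F.cross_entropy(main_out, labels, ignore_index=ignore_index)
    loss_aux = F.cross_entropy(aux_out, labels, ignore_index=ignore_index)
    # Semantic-encoding term: BCE on which classes are present in the image.
    loss_se = F.binary_cross_entropy_with_logits(se_out, se_target)
    return loss_main + se_weight * loss_se + aux_weight * loss_aux
```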
super_gradients.training.losses.ssd_loss module
- class super_gradients.training.losses.ssd_loss.SSDLoss(dboxes: super_gradients.training.utils.ssd_utils.DefaultBoxes, alpha: float = 1.0)[source]
Bases: torch.nn.modules.loss._Loss
Implements the loss as the sum of the following:
1. Confidence loss: all labels, with hard negative mining.
2. Localization loss: only on positive labels.
- forward(predictions, targets)[source]
Compute the loss.
- Parameters
predictions – predictions tensor coming from the network, of shape [N, num_classes+4, num_dboxes], where the first four items are (x, y, w, h) and the rest are class confidences
targets – targets for the batch, of shape [num_targets, 6] (index in batch, label, x, y, w, h)
- match_dboxes(targets)[source]
Converts ground truth boxes into a tensor with the same size as dboxes. Each GT bbox is matched to every destination box that overlaps it with IoU over 0.5, so a GT bbox can be duplicated to several destination boxes.
- Parameters
targets – a tensor containing the boxes for a single image, of shape [num_boxes, 5] (x, y, w, h, label)
- Returns
two tensors: boxes, with the shape of dboxes [4, num_dboxes] (x, y, w, h), and labels, of shape [num_dboxes]
- reduction: str
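A sketch of the matching rule described in match_dboxes, assuming a precomputed [num_gt, num_dboxes] IoU matrix (the class's actual implementation also handles box encoding):

```python
import torch

def match_sketch(iou: torch.Tensor, gt_labels: torch.Tensor, num_dboxes: int) -> torch.Tensor:
    # iou[i, j] = IoU between ground-truth box i and default box j.
    labels = torch.zeros(num_dboxes, dtype=torch.long)  # 0 = background
    best_gt_iou, best_gt_idx = iou.max(dim=0)           # best GT for each default box
    matched = best_gt_iou > 0.5                         # the 0.5 IoU rule described above
    labels[matched] = gt_labels[best_gt_idx[matched]]
    return labels
```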
super_gradients.training.losses.yolo_v3_loss module
- class super_gradients.training.losses.yolo_v3_loss.YoLoV3DetectionLoss(model: torch.nn.modules.module.Module, cls_pw: float = 1.0, obj_pw: float = 1.0, giou: float = 3.54, obj: float = 64.3, cls: float = 37.4)[source]
Bases: torch.nn.modules.loss._Loss
YoLoV3DetectionLoss - Loss Class for Object Detection
- forward(model_output, targets)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- reduction: str
super_gradients.training.losses.yolo_v5_loss module
- class super_gradients.training.losses.yolo_v5_loss.YoLoV5DetectionLoss(anchors: super_gradients.training.utils.detection_utils.Anchors, cls_pos_weight: Union[float, List[float]] = 1.0, obj_pos_weight: float = 1.0, obj_loss_gain: float = 1.0, box_loss_gain: float = 0.05, cls_loss_gain: float = 0.5, focal_loss_gamma: float = 0.0, cls_objectness_weights: Optional[Union[List[float], torch.Tensor]] = None)[source]
Bases: torch.nn.modules.loss._Loss
Calculate the YOLO V5 loss: L = L_objectness + L_boxes + L_classification
- build_targets(predictions: List[torch.Tensor], targets: torch.Tensor, anchor_threshold=4.0) → Tuple[List[torch.Tensor], List[torch.Tensor], List[Tuple[torch.Tensor]], List[torch.Tensor]][source]
- Assign targets to anchors for use in the L_boxes & L_classification calculation (a sketch of the size filter follows this entry):
each target can be assigned to several anchors: all anchors whose size is within [1/anchor_threshold, anchor_threshold] times the target size;
each anchor can be assigned to several targets.
- Parameters
predictions – Yolo predictions
targets – ground truth targets
anchor_threshold – ratio defining a size range of an appropriate anchor
- Returns
each of the 4 outputs contains one element for each Yolo output level; correspondences are raveled over the whole batch and all anchors:
classes of the targets;
boxes of the targets;
image id in a batch, anchor id, grid y, grid x coordinates;
anchor sizes.
All the above can be indexed in parallel to get the selected correspondences
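A sketch of the size-range filter from the first bullet above (tensor names are illustrative):

```python
import torch

def anchor_match_sketch(target_wh: torch.Tensor, anchor_wh: torch.Tensor,
                        anchor_threshold: float = 4.0) -> torch.Tensor:
    # A target suits an anchor if neither of its sides is more than
    # anchor_threshold times larger or smaller than the anchor's side.
    ratio = target_wh / anchor_wh  # [num_targets, 2]
    return torch.max(ratio, 1.0 / ratio).max(dim=1).values < anchor_threshold
```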
- compute_loss(predictions: List[torch.Tensor], targets: torch.Tensor, giou_loss_ratio: float = 1.0) → Tuple[torch.Tensor, torch.Tensor][source]
L = L_objectness + L_boxes + L_classification, where:
L_boxes and L_classification are calculated only between anchors and targets that suit them;
L_objectness is calculated on all anchors.
- L_classification:
for anchors that have suitable ground truths in their grid locations, add BCEs to force max probability for each GT class in a multi-label way. Coef: self.cls_loss_gain
- L_boxes:
for anchors that have suitable ground truths in their grid locations, add (1 - IoU) between each predicted box and its GT box, to force maximum IoU. Coef: self.box_loss_gain
- L_objectness:
for each anchor, add a BCE to force a prediction of (1 - giou_loss_ratio) + giou_loss_ratio * IoU, where IoU is between the predicted box and a random GT in its cell (a sketch of this target follows this entry). Coef: self.obj_loss_gain; the loss from each YOLO grid is additionally multiplied by balance = [4.0, 1.0, 0.4] to balance contributions coming from different numbers of grid cells.
- Parameters
predictions – output from all Yolo levels, each of shape [Batch x Num_Anchors x GridSizeY x GridSizeX x (4 + 1 + Num_classes)]
targets – [Num_targets x (4 + 2)], values on dim 1 are: image id in a batch, class, box x y w h
giou_loss_ratio – a coef in L_objectness defining what should be predicted as objectness in a cell with a target; the target can be a value in the [IoU, 1] range
- Returns
the total loss, and all the individual losses in a detached tensor
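A sketch of the objectness target described above for an anchor matched to a ground truth (the real compute_loss also applies the per-level balance weights):

```python
def objectness_target_sketch(iou: float, giou_loss_ratio: float = 1.0) -> float:
    # Plain 1.0 when giou_loss_ratio == 0; the IoU itself when it == 1.
    return (1.0 - giou_loss_ratio) + giou_loss_ratio * iou
```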
- forward(model_output, targets)[source]
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
- reduction: str