super_gradients.training.metrics package
Submodules
super_gradients.training.metrics.classification_metrics module
- super_gradients.training.metrics.classification_metrics.accuracy(output, target, topk=(1,))[source]
Computes the precision@k for the specified values of k.
- Parameters
output – Tensor / Numpy / List The prediction
target – Tensor / Numpy / List The corresponding labels
topk – tuple The type of accuracy to calculate, e.g. topk=(1,5) returns accuracy for top-1 and top-5
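A minimal usage sketch (shapes are illustrative; the return value is assumed to hold one accuracy value per requested k):

```python
import torch
from super_gradients.training.metrics.classification_metrics import accuracy

output = torch.randn(4, 10)          # logits for 4 samples over 10 classes
target = torch.tensor([1, 0, 4, 9])  # ground-truth labels

# Request top-1 and top-5 accuracy; one value per k is assumed to be returned
top1, top5 = accuracy(output, target, topk=(1, 5))
print(top1, top5)
```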
- class super_gradients.training.metrics.classification_metrics.Accuracy(dist_sync_on_step=False)[source]
Bases:
torchmetrics.classification.accuracy.Accuracy
- update(preds: torch.Tensor, target: torch.Tensor)[source]
Update state with predictions and targets. See the torchmetrics documentation on classification input types for more information.
- Parameters
preds – Predictions from model (logits, probabilities, or labels)
target – Ground truth labels
- correct: torch.Tensor
- total: torch.Tensor
- mode: DataType
- training: bool
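As a torchmetrics.Metric subclass, Accuracy follows the usual update()/compute() pattern; a minimal sketch with random data:

```python
import torch
from super_gradients.training.metrics.classification_metrics import Accuracy

metric = Accuracy(dist_sync_on_step=False)

# Accumulate over two batches of logits (8 samples, 10 classes) and integer labels
for _ in range(2):
    preds = torch.randn(8, 10)
    target = torch.randint(0, 10, (8,))
    metric.update(preds, target)

print(metric.compute())  # aggregated top-1 accuracy over both batches
metric.reset()
```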
- class super_gradients.training.metrics.classification_metrics.Top5(dist_sync_on_step=False)[source]
Bases:
torchmetrics.metric.Metric
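Top5 follows the same Metric API; a sketch combining it with Accuracy in a torchmetrics MetricCollection:

```python
import torch
from torchmetrics import MetricCollection
from super_gradients.training.metrics.classification_metrics import Accuracy, Top5

# Track top-1 and top-5 accuracy together through the torchmetrics API
metrics = MetricCollection([Accuracy(), Top5()])

preds = torch.randn(16, 100)           # logits for 16 samples over 100 classes
target = torch.randint(0, 100, (16,))
metrics.update(preds, target)
print(metrics.compute())               # e.g. {'Accuracy': ..., 'Top5': ...}
```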
- class super_gradients.training.metrics.classification_metrics.ToyTestClassificationMetric(dist_sync_on_step=False)[source]
Bases:
torchmetrics.metric.Metric
Dummy classification metric that always returns 0 (for testing).
super_gradients.training.metrics.detection_metrics module
- class super_gradients.training.metrics.detection_metrics.DetectionMetrics(num_cls: int, post_prediction_callback: Optional[super_gradients.training.utils.detection_utils.DetectionPostPredictionCallback] = None, normalize_targets: bool = False, iou_thres: super_gradients.training.utils.detection_utils.IouThreshold = <IouThreshold.MAP_05_TO_095: (0.5, 0.95)>, recall_thres: Optional[torch.Tensor] = None, score_thres: float = 0.1, top_k_predictions: int = 100, dist_sync_on_step: bool = False, accumulate_on_cpu: bool = True)[source]
Bases:
torchmetrics.metric.Metric
Metric class for computing F1, Precision, Recall and Mean Average Precision.
- num_cls
Number of classes.
- post_prediction_callback
DetectionPostPredictionCallback to be applied on net’s output prior to the metric computation (NMS).
- normalize_targets
Whether to normalize bbox coordinates by image size (default=False).
- iou_thresholds
IoU thresholds to compute the mAP (default=torch.linspace(0.5, 0.95, 10)).
- recall_thresholds
Recall thresholds to compute the mAP (default=torch.linspace(0, 1, 101)).
- score_threshold
Score threshold to compute Recall, Precision and F1 (default=0.1)
- top_k_predictions
Number of predictions per class used to compute metrics, ordered by confidence score (default=100)
- dist_sync_on_step
Synchronize metric state across processes at each forward() before returning the value at the step (default=False).
- accumulate_on_cpu
Run on CPU regardless of the device used in other parts. This is to avoid “CUDA out of memory” errors that might happen on GPU (default=True).
- update(preds, target: torch.Tensor, device: str, inputs: torch._VariableFunctionsClass.tensor, crowd_targets: Optional[torch.Tensor] = None)[source]
Apply NMS and match all the predictions and targets of a given batch, and update the metric state accordingly.
- Parameters
preds – Raw output of the model. The format might change from one model to another, but it has to fit the input format of the post_prediction_callback.
target – Targets for all images of shape (total_num_targets, 6) format: (index, x, y, w, h, label) where x,y,w,h are in range [0,1]
device – Device to run on
inputs – Input image tensor of shape (batch_size, n_img, height, width)
crowd_targets – Crowd targets for all images of shape (total_num_targets, 6) format: (index, x, y, w, h, label) where x,y,w,h are in range [0,1]
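A construction sketch; the post-prediction callback must match the detection model in use (a DetectionPostPredictionCallback subclass performing NMS), so it is left as None here only to keep the snippet self-contained:

```python
from super_gradients.training.metrics.detection_metrics import DetectionMetrics
from super_gradients.training.utils.detection_utils import IouThreshold

metric = DetectionMetrics(
    num_cls=80,                     # number of classes in the dataset
    post_prediction_callback=None,  # replace with the NMS callback matching your model
    iou_thres=IouThreshold.MAP_05_TO_095,
    score_thres=0.1,
    top_k_predictions=100,
    normalize_targets=True,
)

# Inside a validation loop (names and shapes are illustrative):
# preds = model(inputs)                                      # raw model output
# metric.update(preds, targets, device="cuda", inputs=inputs)
# print(metric.compute())                                    # mAP, F1, Precision, Recall
```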
super_gradients.training.metrics.metric_utils module
- super_gradients.training.metrics.metric_utils.get_logging_values(loss_loggings: super_gradients.training.utils.utils.AverageMeter, metrics: torchmetrics.collections.MetricCollection, criterion=None)[source]
@param loss_loggings: AverageMeter running average for the loss items
@param metrics: MetricCollection object for running user-specified metrics
@param criterion: the loss object that the loss_loggings average meter is monitoring; when set to None, only the metrics values are computed and returned
@return: tuple of the computed values
- super_gradients.training.metrics.metric_utils.get_metrics_titles(metrics_collection: torchmetrics.collections.MetricCollection)[source]
@param metrics_collection: MetricCollection object for running user-specified metrics
@return: list of all the names of the computed values, as list(str)
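A small sketch, assuming the collection holds the classification metrics defined above; the returned list is expected to contain one name per computed value:

```python
from torchmetrics import MetricCollection
from super_gradients.training.metrics import Accuracy, Top5
from super_gradients.training.metrics.metric_utils import get_metrics_titles

metrics = MetricCollection([Accuracy(), Top5()])
print(get_metrics_titles(metrics))  # expected: ['Accuracy', 'Top5']
```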
- super_gradients.training.metrics.metric_utils.get_metrics_results_tuple(metrics_collection: torchmetrics.collections.MetricCollection)[source]
@param metrics_collection: metrics collection of the user-specified metrics
@type metrics_collection: MetricCollection
@return: tuple of metrics values
- super_gradients.training.metrics.metric_utils.flatten_metrics_dict(metrics_dict: dict)[source]
:param metrics_dict: dictionary of metric values, where values can also be dictionaries containing subvalues (in the case of compound metrics)
@return: flattened dict of metric values, i.e. {metric1_name: metric1_value, …}
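A sketch of the expected behavior on a nested metrics dict (the exact key layout of compound metrics may differ):

```python
from super_gradients.training.metrics.metric_utils import flatten_metrics_dict

# A compound metric contributes a nested dict of sub-values (layout is illustrative)
metrics_dict = {"Accuracy": 0.91, "DetectionMetrics": {"mAP": 0.42, "F1": 0.55}}
print(flatten_metrics_dict(metrics_dict))
# expected: {'Accuracy': 0.91, 'mAP': 0.42, 'F1': 0.55}
```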
- super_gradients.training.metrics.metric_utils.get_metrics_dict(metrics_tuple, metrics_collection, loss_logging_item_names)[source]
Returns a dictionary with the epoch results as values and their names as keys.
@param metrics_tuple: the result tuple
@param metrics_collection: MetricsCollection
@param loss_logging_item_names: loss components’ names
@return: dict
- super_gradients.training.metrics.metric_utils.get_train_loop_description_dict(metrics_tuple, metrics_collection, loss_logging_item_names, **log_items)[source]
Returns a dictionary with the epoch’s logging items as values and their names as keys, with the purpose of passing it as a description to tqdm’s progress bar.
@param metrics_tuple: the result tuple
@param metrics_collection: MetricsCollection
@param loss_logging_item_names: loss components’ names
@param log_items: additional logging items to be rendered
@return: dict
super_gradients.training.metrics.segmentation_metrics module
- super_gradients.training.metrics.segmentation_metrics.batch_pix_accuracy(predict, target)[source]
Batch Pixel Accuracy
:param predict: input 4D tensor
:param target: label 3D tensor
- super_gradients.training.metrics.segmentation_metrics.batch_intersection_union(predict, target, nclass)[source]
Batch Intersection of Union
:param predict: input 4D tensor
:param target: label 3D tensor
:param nclass: number of categories (int)
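A usage sketch; the return values are assumed to be (correct, labeled) pixel counts and per-class (intersection, union) areas respectively, so verify the exact format against the source before relying on it:

```python
import torch
from super_gradients.training.metrics.segmentation_metrics import (
    batch_pix_accuracy,
    batch_intersection_union,
)

nclass = 3
predict = torch.randn(2, nclass, 8, 8)        # 4D network output: (batch, classes, H, W)
target = torch.randint(0, nclass, (2, 8, 8))  # 3D label map: (batch, H, W)

pixel_correct, pixel_labeled = batch_pix_accuracy(predict, target)        # assumed return format
area_inter, area_union = batch_intersection_union(predict, target, nclass)

print(pixel_correct / (pixel_labeled + 1e-10))  # pixel accuracy
print(area_inter / (area_union + 1e-10))        # per-class IoU
```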
- super_gradients.training.metrics.segmentation_metrics.intersection_and_union(im_pred, im_lab, num_class)[source]
- class super_gradients.training.metrics.segmentation_metrics.AbstractMetricsArgsPrepFn[source]
Bases:
abc.ABC
Abstract preprocess metrics arguments class.
- class super_gradients.training.metrics.segmentation_metrics.PreprocessSegmentationMetricsArgs(apply_arg_max: bool = False, apply_sigmoid: bool = False)[source]
Bases:
super_gradients.training.metrics.segmentation_metrics.AbstractMetricsArgsPrepFn
Default segmentation input preprocessing function applied before updating segmentation metrics; handles multiple inputs and applies normalizations.
- class super_gradients.training.metrics.segmentation_metrics.PixelAccuracy(ignore_label=-100, dist_sync_on_step=False, metrics_args_prep_fn: Optional[super_gradients.training.metrics.segmentation_metrics.AbstractMetricsArgsPrepFn] = None)[source]
Bases:
torchmetrics.metric.Metric
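A sketch showing the prep function passed to PixelAccuracy; with apply_arg_max=True it is assumed to convert raw per-class scores into a label map before each metric update:

```python
import torch
from super_gradients.training.metrics.segmentation_metrics import (
    PixelAccuracy,
    PreprocessSegmentationMetricsArgs,
)

# Convert (batch, num_classes, H, W) scores to label maps before each update
prep_fn = PreprocessSegmentationMetricsArgs(apply_arg_max=True)
metric = PixelAccuracy(ignore_label=-100, metrics_args_prep_fn=prep_fn)

preds = torch.randn(2, 5, 32, 32)          # raw per-class scores
target = torch.randint(0, 5, (2, 32, 32))  # integer label map
metric.update(preds, target)
print(metric.compute())
```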
- class super_gradients.training.metrics.segmentation_metrics.IoU(num_classes: int, dist_sync_on_step: bool = False, ignore_index: Optional[int] = None, reduction: str = 'elementwise_mean', threshold: float = 0.5, metrics_args_prep_fn: Optional[super_gradients.training.metrics.segmentation_metrics.AbstractMetricsArgsPrepFn] = None)[source]
Bases:
torchmetrics.classification.jaccard.JaccardIndex
- update(preds, target: torch.Tensor)[source]
Update state with predictions and targets.
- Parameters
preds – Predictions from model
target – Ground truth values
- confmat: torch.Tensor
- training: bool
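The same pattern applies to IoU (and, analogously, to Dice below); a sketch with the argmax preprocessing:

```python
import torch
from super_gradients.training.metrics.segmentation_metrics import (
    IoU,
    PreprocessSegmentationMetricsArgs,
)

num_classes = 5
metric = IoU(
    num_classes=num_classes,
    metrics_args_prep_fn=PreprocessSegmentationMetricsArgs(apply_arg_max=True),
)

preds = torch.randn(2, num_classes, 32, 32)          # raw per-class scores
target = torch.randint(0, num_classes, (2, 32, 32))  # integer label map
metric.update(preds, target)
print(metric.compute())  # mean IoU under the 'elementwise_mean' reduction
```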
- class super_gradients.training.metrics.segmentation_metrics.Dice(num_classes: int, dist_sync_on_step: bool = False, ignore_index: Optional[int] = None, reduction: str = 'elementwise_mean', threshold: float = 0.5, metrics_args_prep_fn: Optional[super_gradients.training.metrics.segmentation_metrics.AbstractMetricsArgsPrepFn] = None)[source]
Bases:
torchmetrics.classification.jaccard.JaccardIndex
- update(preds, target: torch.Tensor)[source]
Update state with predictions and targets.
- Parameters
preds – Predictions from model
target – Ground truth values
- confmat: torch.Tensor
- training: bool
- class super_gradients.training.metrics.segmentation_metrics.BinaryIOU(dist_sync_on_step=True, ignore_index: Optional[int] = None, threshold: float = 0.5, metrics_args_prep_fn: Optional[super_gradients.training.metrics.segmentation_metrics.AbstractMetricsArgsPrepFn] = None)[source]
Bases:
super_gradients.training.metrics.segmentation_metrics.IoU
- confmat: torch.Tensor
- training: bool
- class super_gradients.training.metrics.segmentation_metrics.BinaryDice(dist_sync_on_step=True, ignore_index: Optional[int] = None, threshold: float = 0.5, metrics_args_prep_fn: Optional[super_gradients.training.metrics.segmentation_metrics.AbstractMetricsArgsPrepFn] = None)[source]
Bases:
super_gradients.training.metrics.segmentation_metrics.Dice
- confmat: torch.Tensor
- training: bool
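A sketch for the binary variants; preds are assumed to be per-pixel foreground scores of the same shape as the target, with a sigmoid applied by the default preprocessing and binarization at `threshold`:

```python
import torch
from super_gradients.training.metrics.segmentation_metrics import BinaryIOU, BinaryDice

iou = BinaryIOU(threshold=0.5)
dice = BinaryDice(threshold=0.5)

preds = torch.randn(2, 32, 32)             # raw foreground scores (sigmoid assumed internally)
target = torch.randint(0, 2, (2, 32, 32))  # binary ground-truth mask
iou.update(preds, target)
dice.update(preds, target)
print(iou.compute())
print(dice.compute())
```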
Module contents
- super_gradients.training.metrics.accuracy(output, target, topk=(1,))[source]
Computes the precision@k for the specified values of k.
- Parameters
output – Tensor / Numpy / List The prediction
target – Tensor / Numpy / List The corresponding labels
topk – tuple The type of accuracy to calculate, e.g. topk=(1,5) returns accuracy for top-1 and top-5
- class super_gradients.training.metrics.Accuracy(dist_sync_on_step=False)[source]
Bases:
torchmetrics.classification.accuracy.Accuracy
- update(preds: torch.Tensor, target: torch.Tensor)[source]
Update state with predictions and targets. See the torchmetrics documentation on classification input types for more information.
- Parameters
preds – Predictions from model (logits, probabilities, or labels)
target – Ground truth labels
- correct: torch.Tensor
- total: torch.Tensor
- mode: DataType
- training: bool
- class super_gradients.training.metrics.Top5(dist_sync_on_step=False)[source]
Bases:
torchmetrics.metric.Metric
- class super_gradients.training.metrics.ToyTestClassificationMetric(dist_sync_on_step=False)[source]
Bases:
torchmetrics.metric.Metric
Dummy classification metric that always returns 0 (for testing).
- class super_gradients.training.metrics.DetectionMetrics(num_cls: int, post_prediction_callback: Optional[super_gradients.training.utils.detection_utils.DetectionPostPredictionCallback] = None, normalize_targets: bool = False, iou_thres: super_gradients.training.utils.detection_utils.IouThreshold = <IouThreshold.MAP_05_TO_095: (0.5, 0.95)>, recall_thres: Optional[torch.Tensor] = None, score_thres: float = 0.1, top_k_predictions: int = 100, dist_sync_on_step: bool = False, accumulate_on_cpu: bool = True)[source]
Bases:
torchmetrics.metric.Metric
Metric class for computing F1, Precision, Recall and Mean Average Precision.
- num_cls
Number of classes.
- post_prediction_callback
DetectionPostPredictionCallback to be applied on net’s output prior to the metric computation (NMS).
- normalize_targets
Whether to normalize bbox coordinates by image size (default=False).
- iou_thresholds
IoU thresholds to compute the mAP (default=torch.linspace(0.5, 0.95, 10)).
- recall_thresholds
Recall thresholds to compute the mAP (default=torch.linspace(0, 1, 101)).
- score_threshold
Score threshold to compute Recall, Precision and F1 (default=0.1)
- top_k_predictions
Number of predictions per class used to compute metrics, ordered by confidence score (default=100)
- dist_sync_on_step
Synchronize metric state across processes at each forward() before returning the value at the step (default=False).
- accumulate_on_cpu
Run on CPU regardless of the device used in other parts. This is to avoid “CUDA out of memory” errors that might happen on GPU (default=True).
- update(preds, target: torch.Tensor, device: str, inputs: torch._VariableFunctionsClass.tensor, crowd_targets: Optional[torch.Tensor] = None)[source]
Apply NMS and match all the predictions and targets of a given batch, and update the metric state accordingly.
- Parameters
preds – Raw output of the model. The format might change from one model to another, but it has to fit the input format of the post_prediction_callback.
target – Targets for all images of shape (total_num_targets, 6) format: (index, x, y, w, h, label) where x,y,w,h are in range [0,1]
device – Device to run on
inputs – Input image tensor of shape (batch_size, n_img, height, width)
crowd_targets – Crowd targets for all images of shape (total_num_targets, 6) format: (index, x, y, w, h, label) where x,y,w,h are in range [0,1]
- class super_gradients.training.metrics.PreprocessSegmentationMetricsArgs(apply_arg_max: bool = False, apply_sigmoid: bool = False)[source]
Bases:
super_gradients.training.metrics.segmentation_metrics.AbstractMetricsArgsPrepFn
Default segmentation input preprocessing function applied before updating segmentation metrics; handles multiple inputs and applies normalizations.
- class super_gradients.training.metrics.PixelAccuracy(ignore_label=-100, dist_sync_on_step=False, metrics_args_prep_fn: Optional[super_gradients.training.metrics.segmentation_metrics.AbstractMetricsArgsPrepFn] = None)[source]
Bases:
torchmetrics.metric.Metric
- class super_gradients.training.metrics.IoU(num_classes: int, dist_sync_on_step: bool = False, ignore_index: Optional[int] = None, reduction: str = 'elementwise_mean', threshold: float = 0.5, metrics_args_prep_fn: Optional[super_gradients.training.metrics.segmentation_metrics.AbstractMetricsArgsPrepFn] = None)[source]
Bases:
torchmetrics.classification.jaccard.JaccardIndex
- update(preds, target: torch.Tensor)[source]
Update state with predictions and targets.
- Parameters
preds – Predictions from model
target – Ground truth values
- confmat: torch.Tensor
- training: bool
- class super_gradients.training.metrics.Dice(num_classes: int, dist_sync_on_step: bool = False, ignore_index: Optional[int] = None, reduction: str = 'elementwise_mean', threshold: float = 0.5, metrics_args_prep_fn: Optional[super_gradients.training.metrics.segmentation_metrics.AbstractMetricsArgsPrepFn] = None)[source]
Bases:
torchmetrics.classification.jaccard.JaccardIndex
- update(preds, target: torch.Tensor)[source]
Update state with predictions and targets.
- Parameters
preds – Predictions from model
target – Ground truth values
- confmat: torch.Tensor
- training: bool
- class super_gradients.training.metrics.BinaryIOU(dist_sync_on_step=True, ignore_index: Optional[int] = None, threshold: float = 0.5, metrics_args_prep_fn: Optional[super_gradients.training.metrics.segmentation_metrics.AbstractMetricsArgsPrepFn] = None)[source]
Bases:
super_gradients.training.metrics.segmentation_metrics.IoU
- confmat: torch.Tensor
- training: bool
- class super_gradients.training.metrics.BinaryDice(dist_sync_on_step=True, ignore_index: Optional[int] = None, threshold: float = 0.5, metrics_args_prep_fn: Optional[super_gradients.training.metrics.segmentation_metrics.AbstractMetricsArgsPrepFn] = None)[source]
Bases:
super_gradients.training.metrics.segmentation_metrics.Dice
- confmat: torch.Tensor
- training: bool