super_gradients.training.metrics package

Submodules

super_gradients.training.metrics.classification_metrics module

super_gradients.training.metrics.classification_metrics.accuracy(output, target, topk=(1,))[source]

Computes the precision@k for the specified values of k.

Parameters
  • output – Tensor / Numpy / List The prediction

  • target – Tensor / Numpy / List The corresponding labels

  • topk – tuple The type of accuracy to calculate, e.g. topk=(1, 5) returns accuracy for top-1 and top-5
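
A minimal usage sketch, assuming the classic precision@k recipe this helper follows (one returned value per entry in topk):

    import torch

    from super_gradients.training.metrics.classification_metrics import accuracy

    # 4 samples, 10 classes: raw logits and integer ground-truth labels.
    output = torch.randn(4, 10)
    target = torch.randint(0, 10, (4,))

    # Assumed: one accuracy value is returned per entry in topk.
    top1, top5 = accuracy(output, target, topk=(1, 5))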

class super_gradients.training.metrics.classification_metrics.Accuracy(dist_sync_on_step=False)[source]

Bases: torchmetrics.classification.accuracy.Accuracy

update(preds: torch.Tensor, target: torch.Tensor)[source]

Update state with predictions and targets. See the torchmetrics documentation for more information on input types.

Parameters
  • preds – Predictions from model (logits, probabilities, or labels)

  • target – Ground truth labels

correct: torch.Tensor
total: torch.Tensor
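
A minimal sketch of the standard torchmetrics update/compute cycle with this class (batch shapes are illustrative); Top5 below follows the same pattern:

    import torch

    from super_gradients.training.metrics.classification_metrics import Accuracy

    metric = Accuracy(dist_sync_on_step=False)

    for _ in range(3):  # e.g. loop over validation batches
        preds = torch.randn(8, 10).softmax(dim=1)  # per-class probabilities
        target = torch.randint(0, 10, (8,))        # integer labels
        metric.update(preds, target)

    top1 = metric.compute()  # aggregated top-1 accuracy over all batches
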
class super_gradients.training.metrics.classification_metrics.Top5(dist_sync_on_step=False)[source]

Bases: torchmetrics.metric.Metric

update(preds: torch.Tensor, target: torch.Tensor)[source]

Override this method to update the state variables of your metric class.

compute()[source]

Override this method to compute the final metric value from state variables synchronized across the distributed backend.

class super_gradients.training.metrics.classification_metrics.ToyTestClassificationMetric(dist_sync_on_step=False)[source]

Bases: torchmetrics.metric.Metric

Dummy classification Metric object that always returns 0 (for testing).

update(preds: torch.Tensor, target: torch.Tensor) → None[source]

Override this method to update the state variables of your metric class.

compute()[source]

Override this method to compute the final metric value from state variables synchronized across the distributed backend.

super_gradients.training.metrics.detection_metrics module

super_gradients.training.metrics.detection_metrics.compute_ap(recall, precision, method: str = 'interp')[source]

Compute the average precision, given the recall and precision curves. Source: https://github.com/rbgirshick/py-faster-rcnn.

Parameters
  • recall – The recall curve - ndarray [1, points in curve]

  • precision – The precision curve - ndarray [1, points in curve]

  • method – ‘continuous’ or ‘interp’

Returns
The average precision as computed in py-faster-rcnn.
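
A toy sketch with hand-made curves; the 1-D array shapes and scalar return are assumptions based on the docstring above:

    import numpy as np

    from super_gradients.training.metrics.detection_metrics import compute_ap

    # Toy monotone curves; in practice these come from ranked detections
    # accumulated by ap_per_class below.
    recall = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
    precision = np.array([1.0, 0.9, 0.8, 0.6, 0.5])

    ap = compute_ap(recall, precision, method='interp')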

super_gradients.training.metrics.detection_metrics.ap_per_class(tp, conf, pred_cls, target_cls)[source]

Compute the average precision, given the recall and precision curves. Source: https://github.com/rafaelpadilla/Object-Detection-Metrics.

Parameters
  • tp – True positives (nparray, nx1 or nx10)

  • conf – Objectness value from 0-1 (nparray)

  • pred_cls – Predicted object classes (nparray)

  • target_cls – True object classes (nparray)

Returns
The average precision as computed in py-faster-rcnn.

class super_gradients.training.metrics.detection_metrics.DetectionMetrics(num_cls, post_prediction_callback: Optional[super_gradients.training.utils.detection_utils.DetectionPostPredictionCallback] = None, iou_thres: super_gradients.training.utils.detection_utils.IouThreshold = <IouThreshold.MAP_05_TO_095: (0.5, 0.95)>, dist_sync_on_step=False)[source]

Bases: torchmetrics.metric.Metric

update(preds: torch.Tensor, target: torch.Tensor, device, inputs)[source]

Override this method to update the state variables of your metric class.

compute()[source]

Override this method to compute the final metric value from state variables synchronized across the distributed backend.
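
A construction-only sketch; num_cls=80 is illustrative (COCO-style), and in a real run the post-prediction callback (e.g. an NMS wrapper) comes from the specific detection architecture:

    from super_gradients.training.metrics.detection_metrics import DetectionMetrics
    from super_gradients.training.utils.detection_utils import IouThreshold

    metric = DetectionMetrics(num_cls=80, iou_thres=IouThreshold.MAP_05_TO_095)
    # Per batch: metric.update(preds, target, device=device, inputs=inputs),
    # then metric.compute() returns the aggregated mAP over the IoU range.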

super_gradients.training.metrics.metric_utils module

super_gradients.training.metrics.metric_utils.calc_batch_prediction_detection_metrics_per_class(metrics, dataset_interface, iou_thres, silent_mode, images_counter, per_class_verbosity, class_names, test_loss)[source]

super_gradients.training.metrics.metric_utils.get_logging_values(loss_loggings: super_gradients.training.utils.utils.AverageMeter, metrics: torchmetrics.collections.MetricCollection, criterion=None)[source]

Parameters
  • loss_loggings – AverageMeter running average for the loss items

  • metrics – MetricCollection object for running user-specified metrics

  • criterion – the loss object that the loss_loggings average meter is monitoring; when set to None, only the metric values are computed and returned

Returns
Tuple of the computed values.

super_gradients.training.metrics.metric_utils.get_metrics_titles(metrics_collection: torchmetrics.collections.MetricCollection)[source]

Parameters
  • metrics_collection – MetricCollection object for running user-specified metrics

Returns
List of the names of all computed values, list(str).

super_gradients.training.metrics.metric_utils.get_metrics_results_tuple(metrics_collection: torchmetrics.collections.MetricCollection)[source]

Parameters
  • metrics_collection – metrics collection of the user-specified metrics

Returns
Tuple of metric values.
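
A sketch tying the two helpers above together; the exact names in the returned title list are assumptions (they follow the MetricCollection keys):

    import torch
    from torchmetrics import MetricCollection

    from super_gradients.training.metrics.classification_metrics import Accuracy, Top5
    from super_gradients.training.metrics.metric_utils import (
        get_metrics_results_tuple,
        get_metrics_titles,
    )

    metrics = MetricCollection([Accuracy(), Top5()])
    metrics.update(torch.randn(8, 10).softmax(dim=1), torch.randint(0, 10, (8,)))

    print(get_metrics_titles(metrics))         # e.g. ['Accuracy', 'Top5']
    print(get_metrics_results_tuple(metrics))  # matching tuple of values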

super_gradients.training.metrics.metric_utils.flatten_metrics_dict(metrics_dict: dict)[source]

Parameters
  • metrics_dict – dictionary of metric values, where values can themselves be dictionaries of sub-values (in the case of compound metrics)

Returns
Flattened dict of metric values, i.e. {metric1_name: metric1_value, …}.
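
A pure-Python illustration with a hypothetical compound metric; the key names here are made up:

    from super_gradients.training.metrics.metric_utils import flatten_metrics_dict

    # Hypothetical: "IoU" is a compound metric whose value is itself a dict.
    nested = {"Accuracy": 0.91, "IoU": {"mean_IoU": 0.55, "background_IoU": 0.88}}

    # Sub-values are lifted to the top level of the returned dict,
    # e.g. {"Accuracy": 0.91, "mean_IoU": 0.55, "background_IoU": 0.88}.
    flat = flatten_metrics_dict(nested)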

super_gradients.training.metrics.metric_utils.get_metrics_dict(metrics_tuple, metrics_collection, loss_logging_item_names)[source]

Returns a dictionary with the epoch results as values and their names as keys.

Parameters
  • metrics_tuple – the result tuple

  • metrics_collection – MetricsCollection

  • loss_logging_item_names – names of the loss components

Returns
dict

super_gradients.training.metrics.metric_utils.get_train_loop_description_dict(metrics_tuple, metrics_collection, loss_logging_item_names, **log_items)[source]

Returns a dictionary with the epoch’s logging items as values and their names as keys, for passing as a description to tqdm’s progress bar.

Parameters
  • metrics_tuple – the result tuple

  • metrics_collection – MetricsCollection

  • loss_logging_item_names – names of the loss components

  • log_items – additional logging items to be rendered

Returns
dict

super_gradients.training.metrics.segmentation_metrics module

super_gradients.training.metrics.segmentation_metrics.batch_pix_accuracy(predict, target)[source]

Batch Pixel Accuracy

Parameters
  • predict – input 4D tensor

  • target – label 3D tensor

super_gradients.training.metrics.segmentation_metrics.batch_intersection_union(predict, target, nclass)[source]

Batch Intersection of Union

Parameters
  • predict – input 4D tensor

  • target – label 3D tensor

  • nclass – number of categories (int)
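
A shape-level sketch of the two batch helpers; the (correct, labeled) and (intersection, union) return pairs are assumptions based on the common torch-encoding recipe they mirror:

    import torch

    from super_gradients.training.metrics.segmentation_metrics import (
        batch_intersection_union,
        batch_pix_accuracy,
    )

    predict = torch.randn(2, 21, 64, 64)        # 4D scores: batch x classes x H x W
    target = torch.randint(0, 21, (2, 64, 64))  # 3D labels: batch x H x W

    correct, labeled = batch_pix_accuracy(predict, target)
    inter, union = batch_intersection_union(predict, target, nclass=21)

    pix_acc = correct / (labeled + 1e-10)
    miou = (inter / (union + 1e-10)).mean()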

super_gradients.training.metrics.segmentation_metrics.pixel_accuracy(im_pred, im_lab)[source]

super_gradients.training.metrics.segmentation_metrics.intersection_and_union(im_pred, im_lab, num_class)[source]

class super_gradients.training.metrics.segmentation_metrics.PixelAccuracy(ignore_label=-100, dist_sync_on_step=False)[source]

Bases: torchmetrics.metric.Metric

update(preds: torch.Tensor, target: torch.Tensor)[source]

Override this method to update the state variables of your metric class.

compute()[source]

Override this method to compute the final metric value from state variables synchronized across the distributed backend.

class super_gradients.training.metrics.segmentation_metrics.IoU(num_classes, dist_sync_on_step=True, ignore_index=None)[source]

Bases: torchmetrics.classification.iou.IoU

update(preds, target: torch.Tensor)[source]

Update state with predictions and targets.

Parameters
  • preds – Predictions from model

  • target – Ground truth values

confmat: torch.Tensor
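
A minimal sketch pairing PixelAccuracy and IoU on one dummy batch (shapes illustrative; both follow the same update/compute cycle as the classification metrics above):

    import torch

    from super_gradients.training.metrics.segmentation_metrics import IoU, PixelAccuracy

    pixel_acc = PixelAccuracy(ignore_label=-100)
    iou = IoU(num_classes=21)

    preds = torch.randn(2, 21, 64, 64).softmax(dim=1)  # per-class probabilities
    target = torch.randint(0, 21, (2, 64, 64))         # ground-truth masks

    pixel_acc.update(preds, target)
    iou.update(preds, target)
    print(pixel_acc.compute(), iou.compute())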

Module contents