super_gradients.training.models package

Submodules

super_gradients.training.models.all_architectures module

super_gradients.training.models.csp_darknet53 module

super_gradients.training.models.darknet53 module

super_gradients.training.models.ddrnet module

super_gradients.training.models.densenet module

super_gradients.training.models.dpn module

super_gradients.training.models.efficientnet module

super_gradients.training.models.googlenet module

super_gradients.training.models.laddernet module

super_gradients.training.models.lenet module

super_gradients.training.models.mobilenet module

super_gradients.training.models.mobilenetv2 module

super_gradients.training.models.mobilenetv3 module

super_gradients.training.models.pnasnet module

super_gradients.training.models.preact_resnet module

super_gradients.training.models.regnet module

super_gradients.training.models.repvgg module

super_gradients.training.models.resnet module

super_gradients.training.models.resnext module

super_gradients.training.models.senet module

super_gradients.training.models.sg_module module

class super_gradients.training.models.sg_module.SgModule

Bases: torch.nn.modules.module.Module

initialize_param_groups(lr: float, training_params: super_gradients.training.utils.utils.HpmStruct) → list

Returns

list of dictionaries containing the key ‘named_params’ with a list of named params
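
A minimal sketch of how a subclass might override initialize_param_groups. The backbone/head split, the "head" name prefix, and the 10x learning-rate multiplier are illustrative assumptions, not part of the SgModule API.

import torch.nn as nn
from super_gradients.training.models.sg_module import SgModule
from super_gradients.training.utils.utils import HpmStruct


class MyModel(SgModule):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Conv2d(3, 16, 3)  # placeholder layers for illustration
        self.head = nn.Conv2d(16, 8, 1)

    def initialize_param_groups(self, lr: float, training_params: HpmStruct) -> list:
        # Illustrative split: train the head with a larger learning rate than the backbone.
        head = [(n, p) for n, p in self.named_parameters() if n.startswith("head")]
        backbone = [(n, p) for n, p in self.named_parameters() if not n.startswith("head")]
        return [
            {"named_params": backbone, "lr": lr},
            {"named_params": head, "lr": lr * 10},  # assumed multiplier, purely illustrative
        ]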

update_param_groups(param_groups: list, lr: float, epoch: int, iter: int, training_params: super_gradients.training.utils.utils.HpmStruct, total_batch: int) → list

Parameters

param_groups – list of dictionaries containing the params

Returns

the updated list of dictionaries containing the params
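
A sketch of an update_param_groups override that rescales each group's learning rate at epoch boundaries; the halve-every-30-epochs schedule is an assumption chosen for illustration, not a default of the library.

from super_gradients.training.models.sg_module import SgModule
from super_gradients.training.utils.utils import HpmStruct


class MyModel(SgModule):
    def update_param_groups(self, param_groups: list, lr: float, epoch: int, iter: int,
                            training_params: HpmStruct, total_batch: int) -> list:
        # Illustrative schedule: halve every group's learning rate every 30 epochs.
        scaled_lr = lr * (0.5 ** (epoch // 30))
        for group in param_groups:
            group["lr"] = scaled_lr
        return param_groups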

get_include_attributes() → list

This function is used by the EMA. When updating the EMA model, some attributes of the main model (used in training) are copied to the EMA model along with the model weights. By default, all attributes are updated except for private attributes (those starting with ‘_’). You can set either include_attributes or exclude_attributes. By returning a non-empty list from this function, you override the default behaviour, and only the attributes named in this list will be updated. Note: this also overrides the get_exclude_attributes list.

Returns

list of attributes to update from the main model to the EMA model

get_exclude_attributes() → list

This function is used by the EMA. When updating the EMA model, some attributes of the main model (used in training) are copied to the EMA model along with the model weights. By default, all attributes are updated except for private attributes (those starting with ‘_’). You can set either include_attributes or exclude_attributes. By returning a non-empty list from this function, you override the default behaviour, and the attributes named in this list will also be excluded from the update. Note: if get_include_attributes returns a non-empty list, it overrides this list.

Returns

list of attributes not to update from the main model to the EMA model
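
A sketch of how a subclass could steer the EMA attribute copy using both hooks; the attribute names returned here (num_classes, strides) are placeholders, not attributes that SgModule defines.

from super_gradients.training.models.sg_module import SgModule


class MyModel(SgModule):
    def get_include_attributes(self) -> list:
        # Copy only these attributes (besides the weights) from the trained model to
        # the EMA model. A non-empty list here overrides get_exclude_attributes.
        return ["num_classes", "strides"]  # placeholder attribute names

    def get_exclude_attributes(self) -> list:
        # Ignored whenever get_include_attributes returns a non-empty list.
        return []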

prep_model_for_conversion(input_size: Optional[Union[tuple, list]] = None, **kwargs)

Prepare the model to be converted to ONNX or other frameworks. Typically, this function freezes the size of layers that are otherwise flexible, replaces some modules with convertible substitutes, and removes all auxiliary or training-related parts.

Parameters

input_size – [H, W]
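
A minimal export sketch built around this method, assuming model is an already-trained SgModule instance; the [320, 320] input size and the file name are arbitrary, and torch.onnx.export is standard PyTorch rather than part of SgModule.

import torch

# Assume `model` is an SgModule instance with trained weights.
model.eval()
model.prep_model_for_conversion(input_size=[320, 320])  # freeze flexible sizes, swap non-convertible ops

dummy_input = torch.randn(1, 3, 320, 320)
torch.onnx.export(model, dummy_input, "model.onnx", opset_version=11)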

replace_head(**kwargs)

Replace the final layer for pretrained models. Since this varies between architectures, we leave it to the inheriting class to implement.
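
A sketch of a replace_head override for a hypothetical classifier; the backbone/head layout and the new_num_classes keyword are illustrative choices, not an interface prescribed by SgModule.

import torch.nn as nn
from super_gradients.training.models.sg_module import SgModule


class MyClassifier(SgModule):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.backbone = nn.Sequential(nn.Conv2d(3, 16, 3), nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(16, num_classes)

    def forward(self, x):
        return self.head(self.backbone(x))

    def replace_head(self, new_num_classes: int = None, **kwargs):
        # Swap the final fully-connected layer so a pretrained backbone can be
        # fine-tuned on a dataset with a different number of classes.
        self.head = nn.Linear(self.head.in_features, new_num_classes)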

training: bool

super_gradients.training.models.shelfnet module

super_gradients.training.models.shufflenet module

super_gradients.training.models.shufflenetv2 module

super_gradients.training.models.ssd module

super_gradients.training.models.vgg module

super_gradients.training.models.yolov3 module

super_gradients.training.models.yolov5 module

Module contents