super_gradients.training.utils.optimizers package

Submodules

super_gradients.training.utils.optimizers.rmsprop_tf module

class super_gradients.training.utils.optimizers.rmsprop_tf.RMSpropTF(params, lr=0.01, alpha=0.9, eps=1e-10, weight_decay=0, momentum=0.0, centered=False, decoupled_decay=False, lr_in_momentum=True)[source]

Bases: torch.optim.optimizer.Optimizer

Implements the RMSprop algorithm with TensorFlow-style epsilon handling.

NOTE: This is a direct copy of the PyTorch RMSprop implementation, with eps applied before the square root and a few other modifications so that it matches TensorFlow more closely under the same hyper-parameters. Noteworthy changes include:

1. Epsilon is applied inside the square root
2. square_avg is initialized to ones
3. LR scaling of the update is accumulated in the momentum buffer

Proposed by G. Hinton in his course. The centered version first appears in Generating Sequences With Recurrent Neural Networks.
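The key numerical difference is where eps enters the denominator. A minimal standalone sketch of the two variants (illustrative tensors only, not the class's internal code):

import torch

grad = torch.tensor([0.1, -0.2])
square_avg = torch.tensor([0.04, 0.01])
eps = 1e-10

# Standard PyTorch RMSprop: eps is added after the square root
denom_pytorch = square_avg.sqrt().add(eps)

# TensorFlow-style (this class): eps is added before the square root
denom_tf = square_avg.add(eps).sqrt()

update_tf = grad / denom_tf

The two denominators converge for large square_avg values but can differ noticeably when the running average of squared gradients is near zero, which is why matching the epsilon placement matters when porting TensorFlow hyper-parameters.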

step(closure=None)[source]

Performs a single optimization step.

Parameters: closure – A closure that reevaluates the model and returns the loss.
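A typical usage sketch, following the standard torch.optim closure pattern; the model, loss, and data below are illustrative assumptions, not part of this module:

import torch
from super_gradients.training.utils.optimizers.rmsprop_tf import RMSpropTF

model = torch.nn.Linear(10, 1)
criterion = torch.nn.MSELoss()
optimizer = RMSpropTF(model.parameters(), lr=0.01, alpha=0.9, momentum=0.9)

inputs, targets = torch.randn(4, 10), torch.randn(4, 1)

def closure():
    # Reevaluate the model: clear stale gradients, recompute the loss, backprop
    optimizer.zero_grad()
    loss = criterion(model(inputs), targets)
    loss.backward()
    return loss

optimizer.step(closure)

Passing a closure is optional; calling optimizer.step() after a separate loss.backward() works as with any torch.optim.Optimizer subclass.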

Module contents