callback.early_stopping
- class pydgn.training.callback.early_stopping.EarlyStopper(monitor: str, mode: str, checkpoint: bool = False)
  Bases: pydgn.training.event.handler.EventHandler
  EarlyStopper is the main event handler for early stopping. Just create a subclass that implements an early stopping method (see the sketch after this entry).
  - Parameters
    - monitor (str) – the metric to monitor. The format is [TRAINING|VALIDATION]_[METRIC NAME], where TRAINING and VALIDATION are defined in pydgn.static
    - mode (str) – can be MIN or MAX (as defined in pydgn.static)
    - checkpoint (bool) – whether we are interested in the checkpoint of the “best” epoch or not
- on_epoch_end(state: pydgn.training.event.state.State)
  At the end of an epoch, check whether the validation score improves over the current best validation score. If so, store the necessary info in a dictionary and save it into the “best_epoch_results” property of the state. If it is time to stop, update the stop_training field of the state.
  - Parameters
    - state (State) – object holding training information
- stop(state: pydgn.training.event.state.State, score_or_loss: str, metric: str) → bool
  Returns true when the early stopping technique decides it is time to stop.
  - Parameters
    - state (State) – object holding training information
    - score_or_loss (str) – whether to monitor scores or losses
    - metric (str) – the metric to consider. The format is [TRAINING|VALIDATION]_[METRIC NAME], where TRAINING and VALIDATION are defined in pydgn.static
  - Returns
    a boolean specifying whether training should be stopped or not
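The following is a minimal sketch of the subclassing pattern described above, using only the documented constructor and stop() signature: a hypothetical stopper that halts once the monitored value crosses a fixed threshold. The threshold parameter and the way the monitored value is read from the state are illustrative assumptions, not part of PyDGN.

```python
from pydgn.static import MAX
from pydgn.training.callback.early_stopping import EarlyStopper


class ThresholdEarlyStopper(EarlyStopper):
    """Hypothetical stopper: halt once the monitored value crosses a threshold."""

    def __init__(self, monitor: str, mode: str, threshold: float,
                 checkpoint: bool = False):
        super().__init__(monitor, mode, checkpoint)
        self.threshold = threshold

    def stop(self, state, score_or_loss: str, metric: str) -> bool:
        # Assumption: aggregated epoch values live in dictionaries named
        # state.epoch_score / state.epoch_loss (see the pre-conditions in
        # this section); adapt the lookup to your State layout.
        value = getattr(state, f"epoch_{score_or_loss}")[metric]
        if self.mode == MAX:  # assumes EarlyStopper stores `mode` on self
            return value >= self.threshold
        return value <= self.threshold
```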
- class pydgn.training.callback.early_stopping.PatienceEarlyStopper(monitor, mode, patience=30, checkpoint=False)
  Bases: pydgn.training.callback.early_stopping.EarlyStopper
  Early stopper that implements patience: training halts once the monitored metric has not improved for patience consecutive epochs (see the usage sketch after this entry).
  - Parameters
    - monitor (str) – the metric to monitor. The format is [TRAINING|VALIDATION]_[METRIC NAME], where TRAINING and VALIDATION are defined in pydgn.static
    - mode (str) – can be MIN or MAX (as defined in pydgn.static)
    - patience (int) – the number of epochs of patience
    - checkpoint (bool) – whether we are interested in the checkpoint of the “best” epoch or not
- stop(state, score_or_loss, metric)
  Returns true when the early stopping technique decides it is time to stop.
  - Parameters
    - state (State) – object holding training information
    - score_or_loss (str) – whether to monitor scores or losses
    - metric (str) – the metric to consider. The format is [TRAINING|VALIDATION]_[METRIC NAME], where TRAINING and VALIDATION are defined in pydgn.static
  - Returns
    a boolean specifying whether training should be stopped or not
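As a usage example, here is a sketch that instantiates the patience stopper with the documented arguments; the metric name in the monitor string is a placeholder.

```python
from pydgn.static import MIN
from pydgn.training.callback.early_stopping import PatienceEarlyStopper

# Stop when the validation loss has not improved for 50 consecutive epochs,
# keeping a checkpoint of the "best" epoch. "VALIDATION_loss" stands in for
# whatever [TRAINING|VALIDATION]_[METRIC NAME] string your setup uses.
early_stopper = PatienceEarlyStopper(
    monitor='VALIDATION_loss',
    mode=MIN,         # a loss should be minimized
    patience=50,
    checkpoint=True,
)
```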
callback.engine_callback
- class pydgn.training.callback.engine_callback.EngineCallback(store_last_checkpoint: bool)
  Bases: pydgn.training.event.handler.EventHandler
  Class responsible for fetching data and handling current-epoch checkpoints at training time.
  - Parameters
    - store_last_checkpoint (bool) – if True, keep the model’s checkpoint for the last training epoch
- on_epoch_end(state: pydgn.training.event.state.State)
  Stores the checkpoint in a dictionary with the following fields (all defined in pydgn.static):
  - EPOCH
  - MODEL_STATE
  - OPTIMIZER_STATE
  - SCHEDULER_STATE
  - STOP_TRAINING
  - Parameters
    - state (State) – object holding training information
- on_fetch_data(state: pydgn.training.event.state.State)
  Load the next batch of data, possibly applying some kind of additional pre-processing not included in the transform package (see the sketch at the end of this class entry).
  - Parameters
    - state (State) – object holding training information
  - Pre-condition: the data loader is contained in state.loader_iterable and the minibatch ID (i.e., a counter) is stored in state.id_batch
  - Post-condition: the state object now has a field batch_input with the next batch of data
- on_forward(state: pydgn.training.event.state.State)
  Feed the input data to the model.
  - Parameters
    - state (State) – object holding training information
  - Pre-condition: the following fields have been initialized:
    - state.batch_input: the input to be fed to the model
    - state.batch_targets: the ground truth values to be fed to the model (if any; otherwise, a dummy value can be used)
  - Post-condition: the following fields have been initialized:
    - state.batch_outputs: the output produced by the model (a tuple of values)
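A sketch of the pre-processing hook mentioned in on_fetch_data: a subclass that perturbs each batch right after it is fetched. That state.batch_input is a PyTorch Geometric batch exposing a node-feature tensor x is an illustrative assumption.

```python
import torch

from pydgn.training.callback.engine_callback import EngineCallback


class NoisyInputEngineCallback(EngineCallback):
    """Illustrative subclass that adds Gaussian noise to each fetched batch."""

    def on_fetch_data(self, state):
        # The parent fills state.batch_input, per the post-condition above
        super().on_fetch_data(state)
        # Assumption: batch_input exposes node features as a tensor `x`
        x = state.batch_input.x
        state.batch_input.x = x + 0.01 * torch.randn_like(x)
```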
callback.gradient_clipping
- class pydgn.training.callback.gradient_clipping.GradientClipper(gradient_clipper_class_name: str, **kwargs: dict)
  Bases: pydgn.training.event.handler.EventHandler
  GradientClipper is the main event handler for gradient clippers. Just pass a gradient clipper together with its arguments in the configuration file (see the sketch below).
  - Parameters
    - gradient_clipper_class_name (str) – the dotted path to the gradient clipper class name
    - kwargs (dict) – additional arguments
- on_backward(state: pydgn.training.event.state.State)
  Updates the parameters of the model using loss information.
  - Parameters
    - state (State) – object holding training information
  - Pre-condition: the following field has been initialized:
    - state.batch_loss: a dictionary holding the loss of the minibatch
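A sketch of constructing the clipper programmatically (PyDGN normally builds it from the configuration file). The dotted path below is entirely hypothetical; point it at the gradient clipper class your project provides, to which the keyword arguments are forwarded.

```python
from pydgn.training.callback.gradient_clipping import GradientClipper

# "my_project.clipping.NormClipper" is a made-up dotted path, and `max_norm`
# an illustrative keyword argument forwarded to that class.
clipper = GradientClipper(
    gradient_clipper_class_name='my_project.clipping.NormClipper',
    max_norm=1.0,
)
```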
callback.metric
- class pydgn.training.callback.metric.AdditiveLoss(use_as_loss, reduction='mean', use_nodes_batch_size=False, **losses: dict)
  Bases: pydgn.training.callback.metric.Metric
  AdditiveLoss sums an arbitrary number of losses together (see the sketch after this entry).
  - Parameters
    - use_as_loss (bool) – whether this metric should act as a loss (i.e., it should act when on_backward() is called). Used by PyDGN, no need to care about this.
    - reduction (str) – the type of reduction to apply across samples of the mini-batch. Supports mean and sum. Default is mean.
    - use_nodes_batch_size (bool) – whether to use the number of nodes in the batch, rather than the number of graphs, to compute the metric’s aggregated value for the entire epoch
    - losses (dict) – dictionary of metrics to add together
- forward(targets: torch.Tensor, *outputs: List[torch.Tensor], batch_loss_extra: Optional[dict] = None) → dict
  Computes the metric value. Optionally, and only for scores used as losses, some extra information can also be returned.
  - Parameters
    - targets (torch.Tensor) – ground truth
    - outputs (List[torch.Tensor]) – outputs of the model
    - batch_loss_extra (dict) – dictionary of information computed by metrics used as losses
  - Returns
    A dictionary mapping each metric name to its value
- property name: str
- on_compute_metrics(state: pydgn.training.event.state.State)
  Computes the metrics of interest using the output and ground truth information obtained so far. The loss-related subscriber MUST be called before the score-related one.
  - Parameters
    - state (State) – object holding training information
  - Pre-condition: the following fields have been initialized:
    - state.batch_input: the input to be fed to the model
    - state.batch_targets: the ground truth values to be fed to the model (if any; otherwise, a dummy value can be used)
    - state.batch_outputs: the output produced by the model (a tuple of values)
  - Post-condition: the following fields have been initialized:
    - state.batch_loss: a dictionary holding the loss of the minibatch
    - state.batch_loss_extra: a dictionary containing extra info, e.g., intermediate loss scores etc.
    - state.batch_score: a dictionary holding the score of the minibatch
- on_eval_batch_end(state: pydgn.training.event.state.State)
  Initialize/reset some internal state after evaluating on a new minibatch of data.
  - Parameters
    - state (State) – object holding training information
  - Pre-condition: the following fields have been initialized:
    - state.set: the dataset type (can be TRAINING, VALIDATION or TEST)
    - state.batch_num_graphs: the total number of graphs in the minibatch
    - state.batch_num_nodes: the total number of nodes in the minibatch
    - state.batch_num_targets: the total number of ground truth values in the minibatch
    - state.batch_loss: a dictionary holding the loss of the minibatch
    - state.batch_loss_extra: a dictionary containing extra info, e.g., intermediate loss scores etc.
    - state.batch_score: a dictionary holding the score of the minibatch
- on_eval_epoch_end(state: pydgn.training.event.state.State)
  Initialize/reset some internal state at the end of an evaluation epoch.
  - Parameters
    - state (State) – object holding training information
  - Post-condition: the following fields have been initialized:
    - state.epoch_loss: a dictionary containing the aggregated loss value across all minibatches
    - state.epoch_score: a dictionary containing the aggregated score value across all minibatches
- on_eval_epoch_start(state: pydgn.training.event.state.State)
  Initialize/reset some internal state at the start of an evaluation epoch.
  - Parameters
    - state (State) – object holding training information
  - Pre-condition: the following field has been initialized:
    - state.set: the dataset type (can be TRAINING, VALIDATION or TEST)
- on_training_batch_end(state: pydgn.training.event.state.State)
  Initialize/reset some internal state after training on a new minibatch of data.
  - Parameters
    - state (State) – object holding training information
  - Pre-condition: the following fields have been initialized:
    - state.set: it must be set to TRAINING
    - state.batch_num_graphs: the total number of graphs in the minibatch
    - state.batch_num_nodes: the total number of nodes in the minibatch
    - state.batch_num_targets: the total number of ground truth values in the minibatch
    - state.batch_loss: a dictionary holding the loss of the minibatch
    - state.batch_loss_extra: a dictionary containing extra info, e.g., intermediate loss scores etc.
    - state.batch_score: a dictionary holding the score of the minibatch
- on_training_epoch_end(state: pydgn.training.event.state.State)
  Initialize/reset some internal state at the end of a training epoch.
  - Parameters
    - state (State) – object holding training information
  - Post-condition: the following fields have been initialized:
    - state.epoch_loss: a dictionary containing the aggregated loss value across all minibatches
    - state.epoch_score: a dictionary containing the aggregated score value across all minibatches
- on_training_epoch_start(state: pydgn.training.event.state.State)
  Initialize/reset some internal state at the start of a training epoch.
  - Parameters
    - state (State) – object holding training information
  - Pre-condition: the following field has been initialized:
    - state.set: it must be set to TRAINING
- training: bool
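A sketch that sums two of the metrics documented below into a single training loss. Whether the components are passed as instances (as here) or as configuration dictionaries depends on how the experiment is set up, so treat the call as illustrative.

```python
from pydgn.training.callback.metric import (AdditiveLoss, MeanSquareError,
                                            MulticlassClassification)

# The keyword names ("mse", "ce") label the individual components;
# AdditiveLoss reports their sum as the value used by on_backward().
loss = AdditiveLoss(
    use_as_loss=True,
    reduction='mean',
    mse=MeanSquareError(use_as_loss=True),
    ce=MulticlassClassification(use_as_loss=True),
)
```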
- class pydgn.training.callback.metric.Classification(use_as_loss=False, reduction='mean', use_nodes_batch_size=False)
  Bases: pydgn.training.callback.metric.Metric
  Generic metric for classification tasks. Used to maximize code reuse for classical metrics.
  - Parameters
    - use_as_loss (bool) – whether this metric should act as a loss (i.e., it should act when on_backward() is called). Used by PyDGN, no need to care about this.
    - reduction (str) – the type of reduction to apply across samples of the mini-batch. Supports mean and sum. Default is mean.
    - use_nodes_batch_size (bool) – whether to use the number of nodes in the batch, rather than the number of graphs, to compute the metric’s aggregated value for the entire epoch
- forward(targets: torch.Tensor, *outputs: List[torch.Tensor], batch_loss_extra: Optional[dict] = None) → dict
  Computes the metric value. Optionally, and only for scores used as losses, some extra information can also be returned.
  - Parameters
    - targets (torch.Tensor) – ground truth
    - outputs (List[torch.Tensor]) – outputs of the model
    - batch_loss_extra (dict) – dictionary of information computed by metrics used as losses
  - Returns
    A dictionary mapping each metric name to its value
- property name: str
- training: bool
- class pydgn.training.callback.metric.DotProductLink(use_as_loss: bool = False, reduction: str = 'mean', use_nodes_batch_size: bool = False)
Bases: pydgn.training.callback.metric.Metric
Implements a dot product link prediction metric, as defined in https://arxiv.org/abs/1611.07308.
- forward(targets: torch.Tensor, *outputs: List[torch.Tensor], batch_loss_extra: Optional[dict] = None) → dict
  Computes the metric value. Optionally, and only for scores used as losses, some extra information can also be returned.
  - Parameters
    - targets (torch.Tensor) – ground truth
    - outputs (List[torch.Tensor]) – outputs of the model
    - batch_loss_extra (dict) – dictionary of information computed by metrics used as losses
  - Returns
    A dictionary mapping each metric name to its value
- property name: str
- training: bool
- class pydgn.training.callback.metric.MeanSquareError(use_as_loss=False, reduction='mean', use_nodes_batch_size=False)
  Bases: pydgn.training.callback.metric.Regression
  Wrapper around torch.nn.MSELoss.
  - Parameters
    - use_as_loss (bool) – whether this metric should act as a loss (i.e., it should act when on_backward() is called). Used by PyDGN, no need to care about this.
    - reduction (str) – the type of reduction to apply across samples of the mini-batch. Supports mean and sum. Default is mean.
    - use_nodes_batch_size (bool) – whether to use the number of nodes in the batch, rather than the number of graphs, to compute the metric’s aggregated value for the entire epoch
- property name: str
- training: bool
- class pydgn.training.callback.metric.Metric(use_as_loss: bool = False, reduction: str = 'mean', use_nodes_batch_size: bool = False)
  Bases: torch.nn.modules.module.Module, pydgn.training.event.handler.EventHandler
  Metric is the main event handler for all metrics. Other metrics can easily subclass by implementing the forward() method, though sometimes more complex implementations are required (see the sketch after this entry).
  - Parameters
    - use_as_loss (bool) – whether this metric should act as a loss (i.e., it should act when on_backward() is called). Used by PyDGN, no need to care about this.
    - reduction (str) – the type of reduction to apply across samples of the mini-batch. Supports mean and sum. Default is mean.
    - use_nodes_batch_size (bool) – whether to use the number of nodes in the batch, rather than the number of graphs, to compute the metric’s aggregated value for the entire epoch
- forward(targets: torch.Tensor, *outputs: List[torch.Tensor], batch_loss_extra: Optional[dict] = None) → dict
  Computes the metric value. Optionally, and only for scores used as losses, some extra information can also be returned.
  - Parameters
    - targets (torch.Tensor) – ground truth
    - outputs (List[torch.Tensor]) – outputs of the model
    - batch_loss_extra (dict) – dictionary of information computed by metrics used as losses
  - Returns
    A dictionary mapping each metric name to its value
- get_main_metric_name() → str
Return the metric’s main name. Useful when a metric is the combination of many.
- Returns
the metric’s main name
- property name: str
- on_backward(state: pydgn.training.event.state.State)
  Updates the parameters of the model using loss information.
  - Parameters
    - state (State) – object holding training information
  - Pre-condition: the following field has been initialized:
    - state.batch_loss: a dictionary holding the loss of the minibatch
- on_compute_metrics(state: pydgn.training.event.state.State)
  Computes the metrics of interest using the output and ground truth information obtained so far. The loss-related subscriber MUST be called before the score-related one.
  - Parameters
    - state (State) – object holding training information
  - Pre-condition: the following fields have been initialized:
    - state.batch_input: the input to be fed to the model
    - state.batch_targets: the ground truth values to be fed to the model (if any; otherwise, a dummy value can be used)
    - state.batch_outputs: the output produced by the model (a tuple of values)
  - Post-condition: the following fields have been initialized:
    - state.batch_loss: a dictionary holding the loss of the minibatch
    - state.batch_loss_extra: a dictionary containing extra info, e.g., intermediate loss scores etc.
    - state.batch_score: a dictionary holding the score of the minibatch
- on_eval_batch_end(state: pydgn.training.event.state.State)
  Initialize/reset some internal state after evaluating on a new minibatch of data.
  - Parameters
    - state (State) – object holding training information
  - Pre-condition: the following fields have been initialized:
    - state.set: the dataset type (can be TRAINING, VALIDATION or TEST)
    - state.batch_num_graphs: the total number of graphs in the minibatch
    - state.batch_num_nodes: the total number of nodes in the minibatch
    - state.batch_num_targets: the total number of ground truth values in the minibatch
    - state.batch_loss: a dictionary holding the loss of the minibatch
    - state.batch_loss_extra: a dictionary containing extra info, e.g., intermediate loss scores etc.
    - state.batch_score: a dictionary holding the score of the minibatch
- on_eval_epoch_end(state: pydgn.training.event.state.State)
  Initialize/reset some internal state at the end of an evaluation epoch.
  - Parameters
    - state (State) – object holding training information
  - Post-condition: the following fields have been initialized:
    - state.epoch_loss: a dictionary containing the aggregated loss value across all minibatches
    - state.epoch_score: a dictionary containing the aggregated score value across all minibatches
- on_eval_epoch_start(state: pydgn.training.event.state.State)
  Initialize/reset some internal state at the start of an evaluation epoch.
  - Parameters
    - state (State) – object holding training information
  - Pre-condition: the following field has been initialized:
    - state.set: the dataset type (can be TRAINING, VALIDATION or TEST)
- on_training_batch_end(state: pydgn.training.event.state.State)
  Initialize/reset some internal state after training on a new minibatch of data.
  - Parameters
    - state (State) – object holding training information
  - Pre-condition: the following fields have been initialized:
    - state.set: it must be set to TRAINING
    - state.batch_num_graphs: the total number of graphs in the minibatch
    - state.batch_num_nodes: the total number of nodes in the minibatch
    - state.batch_num_targets: the total number of ground truth values in the minibatch
    - state.batch_loss: a dictionary holding the loss of the minibatch
    - state.batch_loss_extra: a dictionary containing extra info, e.g., intermediate loss scores etc.
    - state.batch_score: a dictionary holding the score of the minibatch
- on_training_epoch_end(state: pydgn.training.event.state.State)
  Initialize/reset some internal state at the end of a training epoch.
  - Parameters
    - state (State) – object holding training information
  - Post-condition: the following fields have been initialized:
    - state.epoch_loss: a dictionary containing the aggregated loss value across all minibatches
    - state.epoch_score: a dictionary containing the aggregated score value across all minibatches
- on_training_epoch_start(state: pydgn.training.event.state.State)
  Initialize/reset some internal state at the start of a training epoch.
  - Parameters
    - state (State) – object holding training information
  - Pre-condition: the following field has been initialized:
    - state.set: it must be set to TRAINING
- training: bool
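To illustrate the subclassing pattern named above (implement forward() and expose a name), here is a minimal sketch of a mean-absolute-error metric. That the first model output holds the predictions and that the reduction string is stored on self are illustrative assumptions.

```python
import torch.nn.functional as F

from pydgn.training.callback.metric import Metric


class MeanAbsoluteError(Metric):
    """A sketch of a custom regression metric."""

    @property
    def name(self) -> str:
        return 'Mean Absolute Error'

    def forward(self, targets, *outputs, batch_loss_extra=None) -> dict:
        predictions = outputs[0]  # assumption: first output = predictions
        mae = F.l1_loss(predictions.squeeze(), targets.squeeze().float(),
                        reduction=self.reduction)  # assumes self.reduction
        # Contract from the docs: return a metric_name -> value dictionary
        return {self.name: mae}
```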
- class pydgn.training.callback.metric.MultiScore(use_as_loss, reduction='mean', use_nodes_batch_size=False, main_scorer=None, **extra_scorers)
  Bases: pydgn.training.callback.metric.Metric
  This class is used to keep track of multiple additional metrics used as scores, rather than losses (see the sketch after this entry).
  - Parameters
    - use_as_loss (bool) – whether this metric should act as a loss (i.e., it should act when on_backward() is called). Used by PyDGN, no need to care about this.
    - reduction (str) – the type of reduction to apply across samples of the mini-batch. Supports mean and sum. Default is mean.
    - use_nodes_batch_size (bool) – whether to use the number of nodes in the batch, rather than the number of graphs, to compute the metric’s aggregated value for the entire epoch
- forward(targets: torch.Tensor, *outputs: List[torch.Tensor], batch_loss_extra: Optional[dict] = None) → dict
  Computes the metric value. Optionally, and only for scores used as losses, some extra information can also be returned.
  - Parameters
    - targets (torch.Tensor) – ground truth
    - outputs (List[torch.Tensor]) – outputs of the model
    - batch_loss_extra (dict) – dictionary of information computed by metrics used as losses
  - Returns
    A dictionary mapping each metric name to its value
- get_main_metric_name()
Return the metric’s main name. Useful when a metric is the combination of many.
- Returns
the metric’s main name
- property name: str
- on_compute_metrics(state: pydgn.training.event.state.State)
  Computes the metrics of interest using the output and ground truth information obtained so far. The loss-related subscriber MUST be called before the score-related one.
  - Parameters
    - state (State) – object holding training information
  - Pre-condition: the following fields have been initialized:
    - state.batch_input: the input to be fed to the model
    - state.batch_targets: the ground truth values to be fed to the model (if any; otherwise, a dummy value can be used)
    - state.batch_outputs: the output produced by the model (a tuple of values)
  - Post-condition: the following fields have been initialized:
    - state.batch_loss: a dictionary holding the loss of the minibatch
    - state.batch_loss_extra: a dictionary containing extra info, e.g., intermediate loss scores etc.
    - state.batch_score: a dictionary holding the score of the minibatch
- on_eval_batch_end(state: pydgn.training.event.state.State)
  Initialize/reset some internal state after evaluating on a new minibatch of data.
  - Parameters
    - state (State) – object holding training information
  - Pre-condition: the following fields have been initialized:
    - state.set: the dataset type (can be TRAINING, VALIDATION or TEST)
    - state.batch_num_graphs: the total number of graphs in the minibatch
    - state.batch_num_nodes: the total number of nodes in the minibatch
    - state.batch_num_targets: the total number of ground truth values in the minibatch
    - state.batch_loss: a dictionary holding the loss of the minibatch
    - state.batch_loss_extra: a dictionary containing extra info, e.g., intermediate loss scores etc.
    - state.batch_score: a dictionary holding the score of the minibatch
- on_eval_epoch_end(state: pydgn.training.event.state.State)
  Initialize/reset some internal state at the end of an evaluation epoch.
  - Parameters
    - state (State) – object holding training information
  - Post-condition: the following fields have been initialized:
    - state.epoch_loss: a dictionary containing the aggregated loss value across all minibatches
    - state.epoch_score: a dictionary containing the aggregated score value across all minibatches
- on_eval_epoch_start(state: pydgn.training.event.state.State)
  Initialize/reset some internal state at the start of an evaluation epoch.
  - Parameters
    - state (State) – object holding training information
  - Pre-condition: the following field has been initialized:
    - state.set: the dataset type (can be TRAINING, VALIDATION or TEST)
- on_training_batch_end(state: pydgn.training.event.state.State)
  Initialize/reset some internal state after training on a new minibatch of data.
  - Parameters
    - state (State) – object holding training information
  - Pre-condition: the following fields have been initialized:
    - state.set: it must be set to TRAINING
    - state.batch_num_graphs: the total number of graphs in the minibatch
    - state.batch_num_nodes: the total number of nodes in the minibatch
    - state.batch_num_targets: the total number of ground truth values in the minibatch
    - state.batch_loss: a dictionary holding the loss of the minibatch
    - state.batch_loss_extra: a dictionary containing extra info, e.g., intermediate loss scores etc.
    - state.batch_score: a dictionary holding the score of the minibatch
- on_training_epoch_end(state: pydgn.training.event.state.State)
  Initialize/reset some internal state at the end of a training epoch.
  - Parameters
    - state (State) – object holding training information
  - Post-condition: the following fields have been initialized:
    - state.epoch_loss: a dictionary containing the aggregated loss value across all minibatches
    - state.epoch_score: a dictionary containing the aggregated score value across all minibatches
- on_training_epoch_start(state: pydgn.training.event.state.State)
  Initialize/reset some internal state at the start of a training epoch.
  - Parameters
    - state (State) – object holding training information
  - Pre-condition: the following field has been initialized:
    - state.set: it must be set to TRAINING
- training: bool
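A sketch of tracking several scores at once: main_scorer drives get_main_metric_name() (and thus model selection), while extra keyword scorers are tracked alongside. The keyword name "toy" is illustrative.

```python
from pydgn.training.callback.metric import (MultiScore, MulticlassAccuracy,
                                            ToyMetric)

scores = MultiScore(
    use_as_loss=False,                 # these act as scores, not losses
    main_scorer=MulticlassAccuracy(),  # provides the main metric name
    toy=ToyMetric(),                   # extra score, tracked alongside
)
```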
- class pydgn.training.callback.metric.MulticlassAccuracy(use_as_loss: bool = False, reduction: str = 'mean', use_nodes_batch_size: bool = False)
Bases: pydgn.training.callback.metric.Metric
Implements multiclass classification accuracy.
- forward(targets: torch.Tensor, *outputs: List[torch.Tensor], batch_loss_extra: Optional[dict] = None) → dict
  Computes the metric value. Optionally, and only for scores used as losses, some extra information can also be returned.
  - Parameters
    - targets (torch.Tensor) – ground truth
    - outputs (List[torch.Tensor]) – outputs of the model
    - batch_loss_extra (dict) – dictionary of information computed by metrics used as losses
  - Returns
    A dictionary mapping each metric name to its value
- property name: str
- training: bool
- class pydgn.training.callback.metric.MulticlassClassification(use_as_loss=False, reduction='mean', use_nodes_batch_size=False)
  Bases: pydgn.training.callback.metric.Classification
  Wrapper around torch.nn.CrossEntropyLoss.
  - Parameters
    - use_as_loss (bool) – whether this metric should act as a loss (i.e., it should act when on_backward() is called). Used by PyDGN, no need to care about this.
    - reduction (str) – the type of reduction to apply across samples of the mini-batch. Supports mean and sum. Default is mean.
    - use_nodes_batch_size (bool) – whether to use the number of nodes in the batch, rather than the number of graphs, to compute the metric’s aggregated value for the entire epoch
- property name: str
- training: bool
- class pydgn.training.callback.metric.Regression(use_as_loss=False, reduction='mean', use_nodes_batch_size=False)
  Bases: pydgn.training.callback.metric.Metric
  Generic metric for regression tasks. Used to maximize code reuse for classical metrics.
  - Parameters
    - use_as_loss (bool) – whether this metric should act as a loss (i.e., it should act when on_backward() is called). Used by PyDGN, no need to care about this.
    - reduction (str) – the type of reduction to apply across samples of the mini-batch. Supports mean and sum. Default is mean.
    - use_nodes_batch_size (bool) – whether to use the number of nodes in the batch, rather than the number of graphs, to compute the metric’s aggregated value for the entire epoch
- forward(targets: torch.Tensor, *outputs: List[torch.Tensor], batch_loss_extra: Optional[dict] = None) → dict
  Computes the metric value. Optionally, and only for scores used as losses, some extra information can also be returned.
  - Parameters
    - targets (torch.Tensor) – ground truth
    - outputs (List[torch.Tensor]) – outputs of the model
    - batch_loss_extra (dict) – dictionary of information computed by metrics used as losses
  - Returns
    A dictionary mapping each metric name to its value
- property name: str
- training: bool
- class pydgn.training.callback.metric.ToyMetric(use_as_loss=False, reduction='mean', use_nodes_batch_size=False)
  Bases: pydgn.training.callback.metric.Metric
  Implements a toy metric.
  - Parameters
    - use_as_loss (bool) – whether this metric should act as a loss (i.e., it should act when on_backward() is called). Used by PyDGN, no need to care about this.
    - reduction (str) – the type of reduction to apply across samples of the mini-batch. Supports mean and sum. Default is mean.
    - use_nodes_batch_size (bool) – whether to use the number of nodes in the batch, rather than the number of graphs, to compute the metric’s aggregated value for the entire epoch
- forward(targets: torch.Tensor, *outputs: List[torch.Tensor], batch_loss_extra: Optional[dict] = None) → dict
  Computes the metric value. Optionally, and only for scores used as losses, some extra information can also be returned.
  - Parameters
    - targets (torch.Tensor) – ground truth
    - outputs (List[torch.Tensor]) – outputs of the model
    - batch_loss_extra (dict) – dictionary of information computed by metrics used as losses
  - Returns
    A dictionary mapping each metric name to its value
- property name: str
- training: bool
- class pydgn.training.callback.metric.ToyUnsupervisedMetric(use_as_loss: bool = False, reduction: str = 'mean', use_nodes_batch_size: bool = False)
Bases: pydgn.training.callback.metric.Metric
- forward(targets: torch.Tensor, *outputs: List[torch.Tensor], batch_loss_extra: Optional[dict] = None) → dict
  Computes the metric value. Optionally, and only for scores used as losses, some extra information can also be returned.
  - Parameters
    - targets (torch.Tensor) – ground truth
    - outputs (List[torch.Tensor]) – outputs of the model
    - batch_loss_extra (dict) – dictionary of information computed by metrics used as losses
  - Returns
    A dictionary mapping each metric name to its value
- property name: str
- training: bool
callback.optimizer
- class pydgn.training.callback.optimizer.Optimizer(model: pydgn.model.interface.ModelInterface, optimizer_class_name: str, accumulate_gradients: bool = False, **kwargs: dict)
  Bases: pydgn.training.event.handler.EventHandler
  Optimizer is the main event handler for optimizers. Just pass a PyTorch optimizer together with its arguments in the configuration file (see the sketch after this entry).
  - Parameters
    - model (ModelInterface) – the model that has to be trained
    - optimizer_class_name (str) – dotted path to the optimizer class to use
    - accumulate_gradients (bool) – if True, accumulate mini-batch gradients to perform a batch gradient update without loading the entire batch in memory
    - kwargs (dict) – additional parameters for the specific optimizer
- load_state_dict(state_dict)
- on_epoch_end(state)
  Perform bookkeeping operations at the end of an epoch, e.g., early stopping, plotting, etc.
  - Parameters
    - state (State) – object holding training information
  - Pre-condition: the following fields have been initialized:
    - state.epoch_loss: a dictionary containing the aggregated loss value across all minibatches
    - state.epoch_score: a dictionary containing the aggregated score value across all minibatches
  - Post-condition: the following fields have been initialized:
    - state.stop_training: do/don’t train the model
    - state.optimizer_state: the internal state of the optimizer (can be None)
    - state.scheduler_state: the internal state of the scheduler (can be None)
    - state.best_epoch_results: a dictionary with the best results computed so far (can be used when resuming training, either for early stopping or to keep some information about the last checkpoint)
- on_fit_start(state)
  Initialize an object at the beginning of the training phase, e.g., the internals of an optimizer, using the information contained in state.
  - Parameters
    - state (State) – object holding training information
  - Pre-condition: the following fields have been initialized:
    - state.initial_epoch: the initial epoch from which to start/resume training
    - state.stop_training: do/don’t train the model
    - state.optimizer_state: the internal state of the optimizer (can be None)
    - state.scheduler_state: the internal state of the scheduler (can be None)
    - state.best_epoch_results: a dictionary with the best results computed so far (can be used when resuming training, either for early stopping or to keep some information about the last checkpoint)
- on_training_batch_end(state)
  Initialize/reset some internal state after training on a new minibatch of data.
  - Parameters
    - state (State) – object holding training information
  - Pre-condition: the following fields have been initialized:
    - state.set: it must be set to TRAINING
    - state.batch_num_graphs: the total number of graphs in the minibatch
    - state.batch_num_nodes: the total number of nodes in the minibatch
    - state.batch_num_targets: the total number of ground truth values in the minibatch
    - state.batch_loss: a dictionary holding the loss of the minibatch
    - state.batch_loss_extra: a dictionary containing extra info, e.g., intermediate loss scores etc.
    - state.batch_score: a dictionary holding the score of the minibatch
- on_training_batch_start(state)
  Initialize/reset some internal state before training on a new minibatch of data.
  - Parameters
    - state (State) – object holding training information
  - Pre-condition: the following fields have been initialized:
    - state.set: it must be set to TRAINING
    - state.batch_input: the input to be fed to the model
    - state.batch_targets: the ground truth values to be fed to the model (if any; otherwise, a dummy value can be used)
    - state.batch_num_graphs: the total number of graphs in the minibatch
    - state.batch_num_nodes: the total number of nodes in the minibatch
    - state.batch_num_targets: the total number of ground truth values in the minibatch
- on_training_epoch_end(state)
  Initialize/reset some internal state at the end of a training epoch.
  - Parameters
    - state (State) – object holding training information
  - Post-condition: the following fields have been initialized:
    - state.epoch_loss: a dictionary containing the aggregated loss value across all minibatches
    - state.epoch_score: a dictionary containing the aggregated score value across all minibatches
- on_training_epoch_start(state)
  Initialize/reset some internal state at the start of a training epoch.
  - Parameters
    - state (State) – object holding training information
  - Pre-condition: the following field has been initialized:
    - state.set: it must be set to TRAINING
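A programmatic sketch of the wrapper (PyDGN normally builds it from the configuration file). A plain torch.nn.Linear stands in for a real ModelInterface implementation, and the Adam hyperparameters are illustrative keyword arguments forwarded to the resolved class.

```python
import torch

from pydgn.training.callback.optimizer import Optimizer

model = torch.nn.Linear(16, 2)  # stand-in for a PyDGN ModelInterface model

optimizer = Optimizer(
    model=model,
    optimizer_class_name='torch.optim.Adam',  # dotted path to the optimizer
    accumulate_gradients=False,
    lr=1e-3,            # forwarded to torch.optim.Adam
    weight_decay=5e-4,  # forwarded to torch.optim.Adam
)
```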
callback.plotter
- class pydgn.training.callback.plotter.Plotter(exp_path: str, **kwargs: dict)
  Bases: pydgn.training.event.handler.EventHandler
  Plotter is the main event handler for plotting at training time (see the sketch after this entry).
  - Parameters
    - exp_path (str) – path where to store the Tensorboard logs
    - kwargs (dict) – additional arguments that may depend on the plotter
- on_epoch_end(state: pydgn.training.event.state.State)
  Perform bookkeeping operations at the end of an epoch, e.g., early stopping, plotting, etc.
  - Parameters
    - state (State) – object holding training information
  - Pre-condition: the following fields have been initialized:
    - state.epoch_loss: a dictionary containing the aggregated loss value across all minibatches
    - state.epoch_score: a dictionary containing the aggregated score value across all minibatches
  - Post-condition: the following fields have been initialized:
    - state.stop_training: do/don’t train the model
    - state.optimizer_state: the internal state of the optimizer (can be None)
    - state.scheduler_state: the internal state of the scheduler (can be None)
    - state.best_epoch_results: a dictionary with the best results computed so far (can be used when resuming training, either for early stopping or to keep some information about the last checkpoint)
- on_fit_end(state: pydgn.training.event.state.State)
  Training has ended; free all resources, e.g., close Tensorboard writers.
  - Parameters
    - state (State) – object holding training information
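A sketch of a subclass that keeps the Tensorboard logging but also echoes the aggregated epoch dictionaries; the state.epoch counter and the exp_path value are illustrative assumptions.

```python
from pydgn.training.callback.plotter import Plotter


class VerbosePlotter(Plotter):
    """Illustrative subclass that echoes epoch results to stdout."""

    def on_epoch_end(self, state):
        super().on_epoch_end(state)  # keep the Tensorboard logging
        # epoch_loss / epoch_score are the dictionaries described above;
        # `state.epoch` (the epoch counter) is assumed here.
        print(f"epoch {state.epoch}: "
              f"losses={state.epoch_loss} scores={state.epoch_score}")


plotter = VerbosePlotter(exp_path='RESULTS/my_exp/tensorboard')  # illustrative path
```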
callback.scheduler
- class pydgn.training.callback.scheduler.EpochScheduler(scheduler_class_name: str, optimizer: torch.optim.optimizer.Optimizer, **kwargs: dict)
Bases: pydgn.training.callback.scheduler.Scheduler
Implements a scheduler which uses epochs to modify the step size
- on_training_epoch_end(state: pydgn.training.event.state.State)
  Initialize/reset some internal state at the end of a training epoch.
  - Parameters
    - state (State) – object holding training information
  - Post-condition: the following fields have been initialized:
    - state.epoch_loss: a dictionary containing the aggregated loss value across all minibatches
    - state.epoch_score: a dictionary containing the aggregated score value across all minibatches
- class pydgn.training.callback.scheduler.MetricScheduler(scheduler_class_name: str, use_loss: bool, monitor: str, optimizer: torch.optim.optimizer.Optimizer, **kwargs: dict)
  Bases: pydgn.training.callback.scheduler.Scheduler
  Implements a scheduler which uses variations in the metric of interest to modify the step size.
  - Parameters
    - scheduler_class_name (str) – dotted path to class name of the scheduler
    - use_loss (bool) – whether to monitor losses (True) or scores (False)
    - monitor (str) – the metric to monitor. The format is [TRAINING|VALIDATION]_[METRIC NAME], where TRAINING and VALIDATION are defined in pydgn.static
    - optimizer (torch.optim.optimizer) – the PyTorch optimizer to use. This is automatically recovered by PyDGN when providing an optimizer
    - kwargs – additional parameters for the specific scheduler to be used
- on_epoch_end(state: pydgn.training.event.state.State)
  Perform bookkeeping operations at the end of an epoch, e.g., early stopping, plotting, etc.
  - Parameters
    - state (State) – object holding training information
  - Pre-condition: the following fields have been initialized:
    - state.epoch_loss: a dictionary containing the aggregated loss value across all minibatches
    - state.epoch_score: a dictionary containing the aggregated score value across all minibatches
  - Post-condition: the following fields have been initialized:
    - state.stop_training: do/don’t train the model
    - state.optimizer_state: the internal state of the optimizer (can be None)
    - state.scheduler_state: the internal state of the scheduler (can be None)
    - state.best_epoch_results: a dictionary with the best results computed so far (can be used when resuming training, either for early stopping or to keep some information about the last checkpoint)
- class pydgn.training.callback.scheduler.Scheduler(scheduler_class_name: str, optimizer: torch.optim.optimizer.Optimizer, **kwargs: dict)
  Bases: pydgn.training.event.handler.EventHandler
  Scheduler is the main event handler for schedulers. Just pass a PyTorch scheduler together with its arguments in the configuration file (see the sketch at the end of this section).
  - Parameters
    - scheduler_class_name (str) – dotted path to class name of the scheduler
    - optimizer (torch.optim.optimizer) – the PyTorch optimizer to use. This is automatically recovered by PyDGN when providing an optimizer
    - kwargs – additional parameters for the specific scheduler to be used
- on_epoch_end(state: pydgn.training.event.state.State)
  Perform bookkeeping operations at the end of an epoch, e.g., early stopping, plotting, etc.
  - Parameters
    - state (State) – object holding training information
  - Pre-condition: the following fields have been initialized:
    - state.epoch_loss: a dictionary containing the aggregated loss value across all minibatches
    - state.epoch_score: a dictionary containing the aggregated score value across all minibatches
  - Post-condition: the following fields have been initialized:
    - state.stop_training: do/don’t train the model
    - state.optimizer_state: the internal state of the optimizer (can be None)
    - state.scheduler_state: the internal state of the scheduler (can be None)
    - state.best_epoch_results: a dictionary with the best results computed so far (can be used when resuming training, either for early stopping or to keep some information about the last checkpoint)
- on_fit_start(state: pydgn.training.event.state.State)
  Initialize an object at the beginning of the training phase, e.g., the internals of an optimizer, using the information contained in state.
  - Parameters
    - state (State) – object holding training information
  - Pre-condition: the following fields have been initialized:
    - state.initial_epoch: the initial epoch from which to start/resume training
    - state.stop_training: do/don’t train the model
    - state.optimizer_state: the internal state of the optimizer (can be None)
    - state.scheduler_state: the internal state of the scheduler (can be None)
    - state.best_epoch_results: a dictionary with the best results computed so far (can be used when resuming training, either for early stopping or to keep some information about the last checkpoint)
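Closing the section, a sketch that wraps a standard PyTorch scheduler with the documented constructor; a torch.nn.Linear again stands in for a real model, and the StepLR arguments are keyword arguments forwarded to the resolved class.

```python
import torch

from pydgn.training.callback.scheduler import EpochScheduler

model = torch.nn.Linear(16, 2)  # stand-in model
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

# Halve the learning rate every 100 epochs via the wrapped StepLR.
scheduler = EpochScheduler(
    scheduler_class_name='torch.optim.lr_scheduler.StepLR',
    optimizer=opt,
    step_size=100,  # forwarded to StepLR
    gamma=0.5,      # forwarded to StepLR
)
```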