RNN Transducer Model¶
-
class openspeech.models.rnn_transducer.model.RNNTransducerModel(configs: omegaconf.dictconfig.DictConfig, vocab: openspeech.vocabs.vocab.Vocabulary)[source]¶
The RNN-Transducer is a sequence-to-sequence model that does not employ an attention mechanism. Unlike most sequence-to-sequence models, which typically need to process the entire input sequence (the waveform, in our case) before producing an output (the sentence), the RNN-T continuously processes input samples and streams output symbols, a property that is welcome for speech dictation. In this implementation, the output symbols are the characters of the alphabet. A minimal instantiation sketch follows the parameter listing below.
- Parameters
configs (DictConfig) – configuration set.
vocab (Vocabulary) – the vocabulary used by the model.
- Inputs:
    inputs (torch.FloatTensor): An input sequence passed to the encoder. Typically this will be a padded FloatTensor of size (batch, seq_length, dimension).
    input_lengths (torch.LongTensor): The length of each input sequence, of size (batch).
- Returns
Result of model predictions.
- Return type
outputs (dict)
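In a normal OpenSpeech run, the model is constructed by the training entry point, which composes the full Hydra configuration and builds the vocabulary from the data module. The sketch below builds a minimal configs object by hand instead; the extra config groups a real run needs (audio, criterion, lr_scheduler, and so on) and the vocab object are deliberately left as labelled assumptions rather than filled in.

    # A minimal sketch, not a verified end-to-end run. It assumes the Hydra-style
    # config grouping used by OpenSpeech ("model", plus the audio/criterion/...
    # groups a real run also supplies) and leaves `vocab` as a placeholder.
    from omegaconf import OmegaConf

    from openspeech.models.rnn_transducer.configurations import RNNTransducerConfigs
    from openspeech.models.rnn_transducer.model import RNNTransducerModel

    model_cfg = OmegaConf.structured(RNNTransducerConfigs())
    configs = OmegaConf.create({"model": model_cfg})  # incomplete: other config groups omitted

    # `vocab` must be an openspeech Vocabulary instance (normally built by the
    # data module from the chosen tokenizer); it is only a placeholder here.
    # model = RNNTransducerModel(configs=configs, vocab=vocab)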
-
forward(inputs: torch.Tensor, input_lengths: torch.Tensor) → Dict[str, torch.Tensor][source]¶
Forward propagates the inputs for inference. A usage sketch follows the return description below.
- Inputs:
- inputs (torch.FloatTensor): A input sequence passed to encoders. Typically for inputs this will be a padded
FloatTensor of size
(batch, seq_length, dimension)
.
input_lengths (torch.LongTensor): The length of input tensor.
(batch)
- Returns
Result of model predictions.
- Return type
outputs (dict)
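A hedged sketch of calling forward with dummy features. The feature dimension (80) is an assumption for illustration, and the keys of the returned dictionary are not specified here, so the call itself is shown only as a comment against an already constructed model.

    # Assumes `model` is an already constructed RNNTransducerModel whose
    # acoustic feature dimension is 80 (an assumption for illustration).
    import torch

    batch_size, seq_length, dimension = 3, 200, 80
    inputs = torch.randn(batch_size, seq_length, dimension)          # padded acoustic features
    input_lengths = torch.tensor([200, 180, 150], dtype=torch.long)  # true length of each utterance

    # outputs = model(inputs, input_lengths)  # dict of model predictions
    # The available keys depend on the model; inspect outputs.keys() to see them.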
-
test_step(batch: tuple, batch_idx: int) → collections.OrderedDict[source]¶
Forward propagates a batch of inputs and targets for testing.
- Inputs:
    batch (tuple): A test batch containing inputs, targets, input_lengths, and target_lengths.
    batch_idx (int): The index of the batch.
- Returns
loss for testing
- Return type
loss (torch.Tensor)
-
training_step(batch: tuple, batch_idx: int) → collections.OrderedDict[source]¶
Forward propagates a batch of inputs and targets for training. The batch layout shared by training_step, validation_step, and test_step is sketched after the validation_step entry below.
- Inputs:
    batch (tuple): A train batch containing inputs, targets, input_lengths, and target_lengths.
    batch_idx (int): The index of the batch.
- Returns
loss for training
- Return type
loss (torch.Tensor)
-
validation_step(batch: tuple, batch_idx: int) → collections.OrderedDict[source]¶
Forward propagates a batch of inputs and targets for validation.
- Inputs:
    batch (tuple): A validation batch containing inputs, targets, input_lengths, and target_lengths.
    batch_idx (int): The index of the batch.
- Returns
loss for validation
- Return type
loss (torch.Tensor)
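The three step methods above share the same batch contract. The sketch below illustrates that layout with made-up shapes and token ids (the feature dimension, lengths, and vocabulary size are assumptions); in practice PyTorch Lightning feeds batches produced by the OpenSpeech data module.

    # Illustrative batch layout shared by training_step, validation_step and
    # test_step: a 4-tuple (inputs, targets, input_lengths, target_lengths).
    import torch

    inputs = torch.randn(2, 150, 80)                            # (batch, seq_length, dimension)
    targets = torch.randint(0, 100, (2, 20), dtype=torch.long)  # (batch, target_length)
    input_lengths = torch.tensor([150, 120], dtype=torch.long)
    target_lengths = torch.tensor([20, 15], dtype=torch.long)

    batch = (inputs, targets, input_lengths, target_lengths)
    # result = model.training_step(batch, batch_idx=0)  # OrderedDict containing the loss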
RNN Transducer Model Configuration¶
-
class openspeech.models.rnn_transducer.configurations.RNNTransducerConfigs(model_name: str = 'rnn_transducer', encoder_hidden_state_dim: int = 320, decoder_hidden_state_dim: int = 512, num_encoder_layers: int = 4, num_decoder_layers: int = 1, encoder_dropout_p: float = 0.2, decoder_dropout_p: float = 0.2, bidirectional: bool = True, rnn_type: str = 'lstm', output_dim: int = 512, optimizer: str = 'adam')[source]¶
This is the configuration class that stores the configuration of an RNNTransducer. It is used to instantiate an RNNTransducer model. Configuration objects inherit from :class:`~openspeech.dataclass.configs.OpenspeechDataclass`.
- Configurations:
    model_name (str): Model name (default: rnn_transducer)
    encoder_hidden_state_dim (int): Hidden state dimension of encoder (default: 320)
    decoder_hidden_state_dim (int): Hidden state dimension of decoder (default: 512)
    num_encoder_layers (int): The number of encoder layers (default: 4)
    num_decoder_layers (int): The number of decoder layers (default: 1)
    encoder_dropout_p (float): The dropout probability of the encoder (default: 0.2)
    decoder_dropout_p (float): The dropout probability of the decoder (default: 0.2)
    bidirectional (bool): If True, the encoder becomes bidirectional (default: True)
    rnn_type (str): Type of RNN cell (rnn, lstm, gru) (default: lstm)
    output_dim (int): Dimension of model output (default: 512)
    optimizer (str): Optimizer for training (default: adam)
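A minimal sketch of inspecting and overriding the defaults listed above. In a normal OpenSpeech run these values come from Hydra overrides on the command line (for example model=rnn_transducer model.num_encoder_layers=6, assuming the model is registered under its model_name); building the config programmatically here is purely for illustration.

    # RNNTransducerConfigs is a plain dataclass, so it composes directly with OmegaConf.
    from omegaconf import OmegaConf

    from openspeech.models.rnn_transducer.configurations import RNNTransducerConfigs

    # Override two fields (illustrative values) and print the resulting YAML.
    cfg = OmegaConf.structured(RNNTransducerConfigs(num_encoder_layers=6, rnn_type="gru"))
    print(OmegaConf.to_yaml(cfg))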